Educación Médica

Print version ISSN 1575-1813

Educ. méd. vol. 7 no. 3, Jul./Sep. 2004

 

Assessment and evaluation


Web-based Clinical Performance Assessment Model Development
Keywords: Clinical Performance Assessment, Rubric
Authors: Kwon, Hyungkyu; Lee, Giljae; Lee, Eunjung
Institution: Kyungsung University (Kwon, H.; Lee, G.); KAIST (Lee, E.)

Summary: Clinical performance assessment using standardized patients is emphasized as a way to measure and evaluate students' clinical practice and performance capabilities objectively. However, it lacks acceptable standardized criteria for performance assessment and faces problems of validity, reliability and fairness because of differences between evaluators and evaluating institutions. This research uses the rubric, an evaluation criterion that clarifies the expected level of outcomes in the performance assessment process. Through the rubric, instructors can set the performance standard for students' work based on data from learning outcomes and obtain guidelines on what to evaluate and how to score; learners gain both a means of self-monitoring their study and the motivation to achieve the goal. The web-based clinical performance assessment model clarifies the objectives (skills, attitudes) for prepared problems and the interaction roles of patients and students. The clarified interaction roles are practised by applying the modes of performance assessment to the evaluation criterion and evaluation method, and automatic or manual scoring is done on the basis of the rubric. A tool for producing and using virtual standardized patients is also supported in the web environment. Rubrics for the produced standardized patients and clinical performance assessments are accumulated in a database and can be used in various synchronous and asynchronous clinical performance assessments. Learners can thus experience many clinical skills under various conditions and circumstances through clinical performance assessment.

 

Developing and Validating an Objective Structured Clinical Examination Station to Assess Evidence-Based Medicine Skills
Keywords: Evidence-based medicine
Authors: Gruppen, LD; Frohna, JG; Mangrulkar, RS; Fliegel, JE
Institution: University of Michigan

Summary: Objectives: Skills in Evidence-Based Medicine (EBM) have been identified by numerous medical education organizations as required competencies for students and residents. Although some tools for assessing EBM knowledge exist, there are few tools that assess competence in EBM performance. We have developed a computer-based objective structured clinical exam (OSCE) station to assess students' EBM skills and to evaluate the effects of curricular changes.

Methods: The web-based case requires students to read a clinical scenario and then 1) ASK a specific clinical question using the Population/Intervention/Comparison/Outcome (PICO) framework, 2) generate appropriate terms for a SEARCH of the literature, and 3) SELECT and justify the most relevant of three provided abstracts to answer the clinical question.

Scores are computed for each of the three sections and overall.

Results: Two cohorts of third-year medical students were compared. The 2002 cohort had a minimal EBM curriculum whereas the 2003 cohort had an expanded, longitudinal EBM curriculum. Our assessment documented statistically and pragmatically significant effects.

Item                         Class of 2002 (N=140)   Class of 2003 (N=157)   Effect Size

ASK                          22.7                    26.0a                   0.59
SEARCH                       13.7                    15.3a                   0.52
SELECT Abstract              22.3                    23.4                    0.10
Total Score                  58.7                    64.8a                   0.46
% passing all three parts    29%                     53%a                    0.48

a p<0.01
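The abstract does not state how the effect sizes were computed; for two independent cohorts they are commonly reported as Cohen's d, a standardized mean difference. As a sketch under that assumption (not taken from the abstract):

\[ d = \frac{\bar{x}_{2003} - \bar{x}_{2002}}{s_{\mathrm{pooled}}}, \qquad s_{\mathrm{pooled}} = \sqrt{\frac{(n_{1}-1)s_{1}^{2} + (n_{2}-1)s_{2}^{2}}{n_{1}+n_{2}-2}} \]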

Conclusions: Using this validated methodology, we were able to document a significant change in performance in two of the three skills on the EBM station. We attribute this improvement to the changes made in our curriculum. This EBM assessment tool has also been used for first-year residents and is currently being evaluated in a multi-institutional validation study.

Assessment of academic staff evaluation program
Keywords: assessment, faculty member, program of evaluation
Authors: Rahimi, B.; Zarghami, N.
Institution: Urmia University of Medical Sciences, Educational Development Center

Summary: Background: The teaching capability of academic staff has a significant relationship with their awareness of the educational process and the evaluation program. It is necessary that academic staff are aware of their own teaching capability and are able to improve continuously the quality of their practice.

Aim: To determine an evaluation program for academic staff.

Summary of work: The subjects of this analytical descriptive study include 70 of 150 academic staff of Urmia University of Medical Sciences who responded to questionnaires. Initially a questionnaire was prepared, containing closed and open ended questions about the evaluation process. To increase the reliability and validity of the questionnaire, it was piloted first. It was distributed and then collected by the researchers.

Summary of results: The findings of this study revealed that 64% of academic staff were male and 36% female. 35.65% indicated no knowledge of an existing evaluation process during teaching, 44.33% indicated a lack of commitment to implementing an evaluation process, and 47.19% indicated a lack of commitment from the authorities and disadvantages of evaluation. 63.5% of academic staff agreed to be evaluated at the end of courses and 70% agreed to take part in educational workshops as a feedback system.

Conclusion: It is speculated that evaluation could improve teaching skills.

 

Explicit transferable skills teaching: does this affect student attitudes or performance in the first year at Medical School?
Keywords: Transferable skills, attitudes, performance
Authors: Whittle, S.R. & Murdoch-Eaton, D.G.
Institution: University of Leeds

Summary: Recent changes in UK school curricula have introduced optional Key Skills units, leading to qualifications in Communication, IT and Use of Number. These are designed to teach transferable skills in the context of students' A level courses. Approximately 20% of undergraduate intake at Leeds University Medical School possess some of these qualifications. This study was designed to detect differences between students with and without explicit skills qualifications. Students completed a questionnaire on arrival which asked how often they had practised a range of 31 transferable skills in the previous 2 years, and how confident they felt about their abilities in these. Studies are underway to determine any differences in performance between the two groups in medical course components with clear transferable skill objectives. Questionnaires were completed by 478 students (99 with Key Skills qualifications, 279 without). Students with Key Skills qualifications felt that they had received more opportunities to practise information handling (p=0.01) and IT skills (p=0.02). They also felt more confident in these skills (information handling p=0.04, IT skills p<0.001). Limited evidence suggests that they rated their technical/numeracy skills more highly (p=0.06). Students who have received specific skills teaching demonstrate improved confidence in some skills, and there appears to be a positive relationship between confidence and opportunities to practise these skills. Initial results from an essay writing module however, suggest that students with key skills qualifications do not perform better. Later performance however may show differences. Should Medical Schools encourage students to achieve these qualifications?

 

The Use of Video to Evaluate Clinical Skills in Paediatrics
Keywords: video, OSCE, assessment, paediatrics
Authors: Round, J.
Institution: St. George's Hospital Medical School

Summary: Background: Objectively testing examination skills in paediatrics raises unique problems. Using general paediatric cases (as seen by GPs or non-specialists) is difficult because signs change and disappear rapidly. Children quickly become tired, uninterested or non-compliant, so the usefulness of a particular station alters during the exam. Lastly, children require feeding and naps, which will not fit an exam schedule. Video has been used in the examination of psychiatric patients and of communication skills. To increase the reliability and face validity of paediatric examination stations, video stations of children with visible signs were developed. This abstract details their content and usefulness.

Methods: Video stations were used in a large (n=186) clinical OSCE for undergraduates, the results of which are reported below. Stations consisted of 60-90 seconds of edited footage of children with acute problems (bronchiolitis, croup) or undergoing developmental assessment. Each station included an instruction sheet and written questions. Performance on the video station was compared with overall performance. Student opinions were obtained at interview.

Results: Students scored a mean of 13.2/20 (SD 2.1) on the video station (OSCE mean 14.2, SD 2.8). Performance correlated well with the overall OSCE result (r=0.32), the mean correlation across stations being r=0.38. Students felt the station was fair, although many confessed to a temporary shock at the new assessment method.

Discussion: This video station compares well with other forms of examination assessment in paediatrics. Its quality may be increased as students become more familiar with this type of station.

 

QFD and continuing medical education
Keywords: QFD matrix, continuing medical education
Authors: Ruiz de Adana Perez, R.; Agrait Garcia, P.; Carrasco Gonzalez, I.; Duro Martinez, J.C.; Rodriguez Vallejo, J.M.; Millan Nuñez Cortes, J.
Institution: Agencia Lain Entralgo

Summary: Quality function deployment (QFD) is a way to listen to customers, understand exactly what they expect and, through a deductive approach, find the best way to satisfy customer needs with the available resources. QFD is a process design methodology for guaranteeing that the customer's voice is heard during the planning, design and implementation of a product or service: listening to, understanding, acting on and translating what the customer tells us is the philosophical heart of QFD.

Objectives: To implement the continuing medical education planning matrix at the Laín Entralgo agency based on the quality function deployment model, and to analyse the QFD matrix, identifying and prioritizing opportunities for improvement in the continuing medical education process.

Methods and persons: A working group composed of six experts in planning continuing medical education implemented the QFD matrix, identifying and analysing the following segments: customer requirements (What?); characteristics of the process activities (How?); the relationship matrix between "what" and "how"; competitive evaluation; objectives of the process activities (How much?); compliance evaluation of the process characteristics; and the technical and relative importance of every process activity.

Results: We present the QFD matrix of the continuing medical education process built from the student requirements. Analysis of the QFD matrix identifies opportunities for improvement in the following activities of the continuing medical education process: identification of organizational needs, identification of professional expectations (needs), drawing up the continuing medical education plan, design of educational courses, selection of teachers, scheduling and course accreditation.

 

Outcome of quality assessment of a cardiology residency as a result of joint brainwork of graduates and their present medical chiefs
Keywords: quality, program evaluation, post-graduate
Authors: Alves de Lima, A.; Terecelan, A.; Nau, G.; Botto, F.; Trivi, M.; Thierer, J.; Belardi, J.
Institution: Instituto Cardiovascular de Buenos Aires

Summary: Experiences acquired by residents during residency programs (RP) do not always assure success in the working field.

Objectives:

a. to find out what the graduates (G) perceive regarding the degree of training acquired after the residency period

b. to correlate the opinions of the graduates and their present medical chiefs (PMC)

c. to determine whether graduates and PMC perceive the existence of a re-adaptation period (RAP) after the conclusion of the RP.

Method: the study was carried out in a university hospital in Buenos Aires in 2003. All the G completed their residency between 1998 and 2001 and were asked to identify their PMC, defined as the doctor responsible for the G for more than 60% of their weekly working hours during the study period. Data were obtained through an 8-question survey. Qualitative and quantitative analyses were carried out, with the Wilcoxon test used for the quantitative analysis.

Results: 15 G (100%) and 13 PMC were included. G expressed great satisfaction with the training received during the RP; in-patient care areas were especially identified. PMC judged the G as highly competent, particularly in in-patient care areas, regarding counselling skills and overall clinical competence. Regarding the RAP, 13 G and 8 PMC considered that it exists and that it lasts 385 (±6) vs. 344 (±5) days (p=NS).

Conclusion: graduates expressed high satisfaction on their preparation and medical chiefs on their performance. Most of the participants considered that there is a re-adaptation period after residency. The present data provides evidence of the effectiveness of a program aimed at preparing doctors for medical practice.

 

Facilitating PPD using a learning portfolio: experience in a new UK medical school
Keywords: professionalism, reflection, portfolio, undergraduate medical education, assessment
Authors: Roberts, JH.
Institution: Phase 1 Medicine, University of Durham, Stockton campus, University boulevard, Thornaby, Stockton-on-Tees, TS17 6BH

Summary: Purpose: To describe the process of using a reflective Learning Portfolio to assess second year medical students' personal and professional development (PPD) in a new UK medical school.

Methods: PPD at Durham covers ethics, communication skills, evidence-based medicine, self-care and clinical contexts of care. As part of their formative assessment, students were required to keep a learning portfolio for eighteen months exploring their development in these areas, supported by five prompts: initial motivation for, and early impressions of, medicine; learning needs and achievements; 'critical incidents'; and links between PPD and the wider curriculum. The portfolios were assessed by a PPD tutor and allocated a provisional mark according to the evidence of reflection throughout the portfolio. This mark was confirmed after a 30-minute interview between the assessing tutor and the student, which was an opportunity for the tutor to give substantial, individual feedback. Tutors were given training to guide them in the marking and in the conduct of the interview.

Results: Tutor and student inexperience combined to produce anxiety about the assignment and some reluctance to seek help. One finding was that students largely used the portfolio as a cathartic exercise which raised issues with confidentiality.

Conclusion: There is a fine balance between encouraging students to determine the content of their own portfolio and the need for clear criteria for assessment purposes. Tutors' enthusiasm and preparation for the activity are also pivotal in securing its success. We have responded to students and tutor feedback by shortening the length of the assignment and providing more support for tutors and students.

 

The evaluation of a medical curriculum: using the methods of programme evaluation to align the planned with the practised curriculum.
Keywords: curriculum evaluation, quality assurance, medical curriculum, programme evaluation
Authors: Wasserman, E.
Institution: University of Stellenbosch, Republic of South Africa

Summary: Background: The current focus on the quality assurance of higher education in general and medical education in particular creates a need for a practical but methodologically sound approach to curriculum evaluation. This presentation describes an approach to curriculum evaluation in medical education based on programme evaluation methods used in the social sciences.

Aims & objectives: The aim of the presentation is to explain how the evaluation of a curriculum can be undertaken on the basis of the methodology of social scientific programme evaluation. The curriculum of the medical programme offered at the Faculty of Health Science of the University of Stellenbosch since 1999 is used as a case study to illustrate this approach.

Methods: Clarificatory evaluation is used to assess the planning of a curriculum (the planned curriculum). A Logic Model is constructed as a product of this clarification evaluation.

Results: Aspects of a Logic Model that is the product of the clarification evaluation of the medical programme offered at the Faculty of Health Science of the University of Stellenbosch will be presented to illustrate this approach.

Discussion and conclusions: Curriculum evaluation is an important component of the process of quality assurance. Aligning the planned and the practised curriculum as an approach to the quality assurance of a curriculum can be applied to any of the four types of academic reviews described by Trow. The approach described here is consistent with the definition of quality as fitness for purpose.

 

Assessment of educational program quality in Tehran University of Medical Sciences and Health Services, based on a survey of the graduates
Keywords: assessment, education, graduate
Authors: Farzianpour, F.
Institution: School of Public Health, Tehran University of Medical Sciences, and Educational Development Center

Summary: Introduction: Educational program quality assessment at the university level aims to determine:

1- the degree and extent to which the university's stated objectives are fulfilled, and

2- the strengths and weaknesses of these assigned objectives. Educational program quality assessment is one of the most significant duties of universities of medical sciences. Moreover, the occupational capacity, ability and efficiency of medical graduates in offering the best health and treatment services, and in supporting individual and social health, depend largely on the fulfilment of the above-mentioned objectives. If educational programs are not well designed and well performed, harmful cultural, social and economic effects are imposed on the public, on the graduates, and on the university's standing and management. The general objective of this study is to improve educational program quality and to promote education at the university level.

Special objectives are:

1- To determine the total average scores of the graduates.

2- Distribution of age and gender.

3- Satisfaction.

4- Strength and weakness points.

5- Finally Educational problems of the graduates.

The survey method was analytic-descriptive, and the community under study comprised 178 graduates of the medical faculty. All data were analysed using SPSS (versions 9 and 10). Survey findings show that 61.2% of the graduates are male and 38.8% female. The graduates' average score is 15.75, with a standard deviation of 1.23 and a range of 12.66 to 18.55. About 78.7% of the graduates expressed satisfaction with their faculty. The most important strength and weakness identified by the survey community were the training period (52.2%) and the physiopathology period (78.7%), respectively. It is concluded that the graduates' average satisfaction with the university is 66.11 and that there has been a significant improvement in educational program quality at the medical university.

 

The relationship between group productivity, tutor performance and effectiveness of PBL
Keywords: Problem-based learning, tutoring
Authors: Dolmans, D., Riksen, D. & Wolfhagen, I.
Institution: University of Maastricht, Dept. of Educational Development & Research, PO Box 616, 6200 MD Maastricht

Summary: Tutor performance and tutorial group productivity correlate highly with each other. Nevertheless, for some tutorial groups a discrepancy is found between the two variables. The hypothesis tested is whether a high-performing tutor can compensate for a low-productivity group and whether a highly productive tutorial group can compensate for a relatively low-performing tutor. Students rated the tutor performance, the tutorial group productivity and the instructiveness of the PBL unit (1-10). In total 287 tutors were involved and were categorized as having a relatively low, average or high score on tutor performance; the same was done for the group productivity score. For each combination, the average instructiveness score was computed. The results demonstrated that the average instructiveness score was higher if the productivity score was higher. The instructiveness score was also higher if the tutor score was higher. However, the average instructiveness score did not differ significantly under different levels of tutor performance, whereas it did differ significantly under different levels of group productivity. It is concluded that a highly productive group can to a considerable extent compensate for a low-performing tutor, whereas a high-performing tutor can only partly compensate for a relatively low-productivity tutorial group. The findings of this study are in line with earlier studies demonstrating that tutorial group productivity and tutor functioning interact with each other in a complex manner. The implication is that faculty should put more effort into improving group productivity, e.g. by evaluating tutorial group functioning on a regular basis.

 

Improving Clinical Competence in Health Issues in a Third Year Pediatric Clerkship
Keywords: Health Issues; Pediatric Clerkship; OSCE
Authors: Bonet, N.; Márquez, M.
Institution: University of Puerto Rico, School of Medicine

Summary: Background: The Objective Structured Clinical Examination (OSCE) is used in medical schools to evaluate clinical skills. At the Department of Pediatrics, School of Medicine of the University of Puerto Rico, students' development of clinical competence in health issues, e.g. growth/development, health maintenance, disease prevention and patient education, has been a great concern, which is supported by students' performance on USMLE Steps 1 and 2 and the NBME pediatric subject test.

Methods: To improve students' clinical skills, the faculty implemented the following interventions:

• Students must follow the Guidelines for Health Supervision III (American Academy of Pediatrics) for interventions in the ambulatory setting and assigned case presentations

• Students are required to present a lecture on the topic of growth and development

• An immunization lecture was added to didactic activities

• Faculty were asked to strengthen their teaching of health promotion, health maintenance issues and disease prevention during clerkship rotations.

To assess students' skills after the above interventions, the faculty in 2002 restructured the pediatric OSCE to include one station dedicated exclusively to health maintenance and disease prevention.

Results: Outcome data for 2002 and 2003 indicate an improvement in students' performance on USMLE Step 2 on the topics of preventive medicine and health maintenance. OSCE mean scores show superior student performance on the health maintenance station, with an overall mean of 85.68%.

Conclusions: There is evidence that selected interventions and OSCE stations to teach and evaluate clinical skills are effective.

 

The Effect of Educational Stressors on the General Health of the Medical Residents
Keywords: educational stressor, general health, medical resident
Authors: Khajehmougahi, N.
Institution: Ahwaz University of Medical Sciences

Summary: Introduction: In the information age, with widespread application of technology across today's knowledge environment, troublesome regulations and traditional medical instruction procedures may cause serious stress and threaten the general health (GH) of medical students.

Aim: The purpose of the present study was to determine the effect of current medical instruction procedures on the general health of residents studying at Ahwaz University of Medical Sciences.

Method: The study was cross-sectional. Subjects were 114 residents from different specialty fields who were willing to cooperate. The instruments were the Educational Stressors Questionnaire, comprising 45 four-choice items, and the General Health Questionnaire. After the questionnaires were completed, the results were analysed with the Pearson correlation coefficient using SPSS.
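For reference, the Pearson correlation coefficient named in the method is the standard statistic below (a general formula, not taken from the abstract); for paired observations (x_i, y_i), i = 1, ..., n:

\[ r = \frac{\sum_{i=1}^{n}(x_{i}-\bar{x})(y_{i}-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_{i}-\bar{x})^{2}}\;\sqrt{\sum_{i=1}^{n}(y_{i}-\bar{y})^{2}}} \]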

Results: The residents identified their educational stressors as follows: lack of an organized curriculum, troublesome educational regulations, deficient educational instruments, and inadequate clinical instruction. 37.6% of the subjects appeared to have problems with GH, and a significant positive correlation (p<0.01) was observed between educational stressors and each of the following: GH, somatic problems, anxiety, and impaired social functioning.

Conclusion: Educational stressors can thus be a risk factor for students' GH, which may lead to reduced interest, academic decline, and failure to master diagnostic procedures and treatment methods. The study's findings suggest that basic changes in current medical instruction are needed.

 

Temperament, character, and academic achievement in medical students
Keywords: Temperament, Character, Academic Achievement
Authors: LEE, YM; HAM, BJ; LEE, KA; AHN, DS; KIM, MK; CHOI, IK; LEE, MS
Institution: College of Medicine, Korea University, College of Medicine, Hallym University

Summary: Objective: This study investigates the relationships between TCI dimensions and the academic achievements of medical students.

Method: Our sample consisted of 119 first-year medical students at the Korea University Medical School during the 2003-04 academic year. The Temperament and Character Inventory (TCI) was administered to all participants during one class in the third quarter of the first academic year of medical studies. In addition, first-year grade-point average (GPA) scores were obtained. We examined the relationships between individual TCI dimensions and GPA scores using correlation coefficients.

Results: Our results suggest that the NS (novelty seeking), P (persistence) and SD (self-directedness) dimensions are associated with academic achievement in medical students. Medical students scoring high on NS and low on P and SD were significantly less likely to succeed in examinations.

Conclusion: Dimensions of personality play a major role in the academic achievement of medical students. Personality assessment may be a useful tool in counselling and guiding medical students.

 

High-stakes undergraduate OSCEs: what do you do for students who require supplementary examinations?
Keywords: OSCE, supplementary examination, practicality
Authors: Worley, P. and Prideaux, D.
Institution: Flinders University

Summary: Increasingly, medical schools are using large scale OSCEs to examine students at key progression points in their undergraduate courses. The reliability and validity of this method of testing is extremely important in a culture where society is demanding high quality standards and students may involve lawyers to overcome perceived unfairness in assessments. Large scale OSCEs require a large commitment from a wide range of clinicians and support staff in both the University and the associated clinical services. This commitment may be given once a year, but what happens when a student is eligible for a medical/compassionate or academic supplementary examination, especially when this examination contributes to a ranking process that determines future career options? And if students with a medical supplementary then qualify for an academic supplementary examination, can you mount a third OSCE?
This paper will examine this important assessment challenge, from both educational and practical perspectives, based on the experience at the Flinders University School of Medicine. We will present a range of solutions to this difficulty and will invite debate drawing on others' experiences in meeting this challenge.

 

Using portfolios to develop and assess student autonomy and reflective practice
Keywords: portfolio assessment, reflective practice
Authors: Toohey, SM, Hughes CS, Kumar RK, O'Sullivan AJ, McNeil HP
Institution: University of New South Wales

Summary: The Faculty of Medicine at the University of New South Wales implemented a new undergraduate program in 2004, which focuses on achievement of a set of eight graduate capabilities. The program emphasis is on producing doctors who have a well-integrated knowledge base, are capable of evaluating their own performance, and can set their own learning agendas. Students have substantial freedom to pursue topics that interest them through project or clinical work. The flexibility of the program, as well as the focus on developing student responsibility and reflective practice, called for a different approach to assessment. As part of an assessment scheme which includes written and clinical exams, individual assignments and group projects, students present a portfolio of their work at three points in the six-year program. Students must pass each of the portfolio assessments to progress to the next phase of the program or to graduate. This paper focuses on the distinctive design features of the UNSW portfolio. These include the use of the portfolio as a tool to help students take responsibility for planning and managing their own learning. Marking against the graduate capabilities through all aspects of the assessment system enables a student to present a profile of performance in each of the capability areas. Included in the portfolio are selected assignment and project work, the full range of teacher grades and comments given in relation to each capability, peer feedback on team work, and the student's own self-assessment and reflection.

 

Clinical Skills Assessment at Medical Schools in Catalonia (Spain) in the year 2003
Keywords: Assessment, clinical skills, undergraduate, OSCE
Authors: Viñeta M, Kronfly E, Gràcia L, Majó J, Prat J, Castro A, Bosch JA, Urrutia A, Gimeno JL, Blay C, Pujol R, Martínez JM.
Institution: Institut d'Estudi de la Salut

Summary: The Institute of Health Studies, jointly with the Catalan medical schools, has conducted several projects on clinical skills assessment using OSCEs since 1994. In 2003 an Objective Structured Clinical Examination (OSCE) was used in seven Catalan medical schools to assess the clinical competences of final-year medical students. A multiple-station examination, with 14 cases distributed across 20 stations, and a written test composed of 150 MCQs (20 questions with associated pictures) were designed to assess medical competences. A questionnaire was distributed to the candidates at the end of the exam to obtain the examinees' opinion. The OSCE scored highly on internal consistency, with a Cronbach's alpha of 0.86 for the multiple-station examination and 0.83 for the written test. The global mean score for the test was 61.82% (sd: 6.7). The mean scores obtained by the 422 medical students who completed the OSCE for each specific competence assessed were as follows: history taking 64.8% (sd: 8.4), physical examination 51.4% (sd: 11), communication skills 61.4% (sd: 6.2), knowledge 58.1% (sd: 10.4), diagnosis and problem-solving 59.8% (sd: 8.9), technical skills 73.9% (sd: 12.4), community health 64.5% (sd: 13), colleague relationships 48.6% (sd: 9.9), research 62% (sd: 22.5) and ethical skills 62.4% (sd: 17.1). The examinees rated the organization, contents and simulations highly (mean scores above 8 on a 10-point Likert scale). The OSCE-based methodology has proved to be a feasible, valid, reliable and acceptable tool for evaluating final-year medical students in our context.
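For reference, the internal-consistency statistic reported above, Cronbach's alpha, is defined for k items (here, stations or questions) with item variances sigma_i^2 and total-score variance sigma_X^2 as follows (a standard formula, not part of the abstract):

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_{i}^{2}}{\sigma_{X}^{2}}\right) \]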

 

Is it possible to conduct high-stakes oral examinations in a reliable and valid way for small numbers of candidates with limited resources?
Keywords: Oral examination, structured oral examination, high-stakes examination, limited resources, reliability, validity, MCQ, feasibility
Authors: Westkämper R1, Hofer R1, Weber M2, Aeschlimann A3, Beyeler C4
Institution: 1Department of Medical Education, University of Bern, 2Stadtspital Triemli, Zürich, 3RehaClinic, Zurzach, 4Department of Rheumatology and Clinical Immunology/Allergology, University of Bern, Switzerland

Summary: Background: Medical societies face the challenge of ensuring high quality certifying examinations with optimal utility (reliability, validity, educational impact, acceptability, costs).

Aims: To assess reliability and to consider aspects of validity of a structured oral examination (SOE) in a small medical society.

Methods: Thirteen candidates took part in the certifying examination based on a blueprint of the Swiss postgraduate training program in rheumatology. A multiple-choice-question (MCQ) test was followed by a SOE [3 teams of 2 examiners testing 3 cases each in two hours according to previously agreed on criteria]. In addition, communication skills (CS) were assessed on a rating scale [9 items, Likert scale 1 to 4]. Data were analysed by SPSS.

Results: The cases were solved on average by 92% of the candidates (range 77-100%). Correlations of the competence demonstrated in one case with the sum of the results achieved in the other 8 cases ranged from -0.14 to 0.97, indicating a wide range of discriminating power. Nevertheless, overall reliability was high (Cronbach's alpha 0.88). Significant correlations were found between SOE and CS (r = 0.88, p < 0.001) and between SOE and MCQ (r = 0.58, p = 0.038), but not between CS and MCQ (r = 0.46, p = 0.110).

Conclusions: Our SOE assessed medical competencies that seem more closely related to CS than to the factual knowledge tested by MCQs, and it yielded high reliability. Our design and the efforts of the examiners contributed to high validity. Altogether this resulted in satisfactory quality with acceptable utility.

 

Using real patients in clinical examinations: A questionnaire study
Keywords: Patients; Paediatric; Clinical Exams
Authors: Williams, S.; Lissauer, T.
Institution: Royal College of Paediatrics and Child Health

Summary: There are a number of publications detailing the experience of examiners and candidates during clinical exams. Little, however, has been documented about the experience of patients, despite specific concerns about the use of real patients in such exams. The aim of the current research is to investigate the experience of parents and children who participate in the MRCPCH Part Two Clinical and Oral examination and to open up the debate on the ethics of using real patients in clinical exams. Questionnaires were sent to all centres hosting the MRCPCH clinical examinations in June 2003 and February 2004 to capture both quantitative and qualitative data. Overall the results suggest that the majority of children and parents found taking part in the clinical examination a positive experience. Multiple regression analysis highlights administrative variables (such as the length of time involved and the conditions at the centre), rather than consultative variables (such as the interactions with the candidates and examiners), as the major factors in a negative experience. Whilst this type of research is relatively new, the results of the present survey suggest that, far from being traumatising or abusive, taking part in the exam was enjoyable for the vast majority of children. They further suggest that careful attention to the timing and structure of the exam could help to eliminate the potential for a negative experience.

 

Variation on a theme: the use of standardized health professionals (SHP) in an objective structured clinical examination (OSCE) in neonatal-perinatal medicine
Keywords: OSCE, Standardized Health Professional, Neonatal
Authors: Brian Simmons, Ann Jefferies, Deborah Clark, Jodi McIlroy, Diana Tabak and Program Directors of the Neonatal-Perinatal Medicine Programs of Canada (2002-03),
Institution: Depts. Of Paediatrics, University of Toronto, Toronto; University of Calgary, Calgary. Wilson Centre for Research in Education, University of Toronto, Toronto, ON, Canada.

Summary: Background: Standardized patients (SPs) are traditionally used in the OSCE to portray patients or parents. We developed an OSCE for subspecialty trainees in Neonatal – Perinatal Medicine that included SHP roles.

Objective: To compare the reliability of SHP and SP stations.

Design/methods: Two OSCEs, conducted in 2002 and 2003, consisted of 14 SP stations, 8 SHP stations and 1 post-encounter probe. SHPs included respiratory therapists, nurses, physicians and a medical student. Examiners completed station-specific checklists, global ratings assessing the CanMEDS roles (medical expert, communicator, collaborator, manager, professional, scholar, health advocate) and an overall global rating. SPs and SHPs completed communication global ratings. Projected alpha coefficients (to a ten-station OSCE) were calculated using the Spearman-Brown prophecy formula.
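The Spearman-Brown prophecy formula mentioned above projects an observed reliability rho to the reliability rho* of a test lengthened by a factor m (here, scaling each OSCE to ten stations); as a standard formula, not taken from the abstract:

\[ \rho^{*} = \frac{m\,\rho}{1 + (m-1)\,\rho} \]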

Results: 54 trainees participated. As shown in the table, alpha coefficients were greater than 0.70. There were no significant differences in reliability between SP and SHP stations (p > 0.05). Reliability was consistently higher with global rating scores.

Conclusions: SHPs may be used in OSCE stations, which require medical knowledge and expertise. SHPs could be used in high stakes exams. A formal training program should be considered.

 

General physicians' views about communication skills and patient education in Shiraz, Iran
Keywords: communication skills, patient education
Authors: Najafipour, F.; Najafipour, A.; Nasimi, B.
Institution: Valfajr Health Center

Summary: Nowadays, the clinical competence of physicians is usually judged on the basis of their communication with patients.

Effective communication between physician and patient is one of the most important steps in improving the level of health and prevention in society. A physician's effective use of communication skills leads to a more involved role for the patient in the treatment process. This study was carried out to assess general physicians' views about communication skills and the role of patient education in the treatment process.

Material & method: This was a descriptive, cross-sectional study. Data were gathered using a validated questionnaire containing closed questions focused on the communication skills and educational behaviour of physicians towards patients. The questionnaire was distributed among 100 general physicians participating in a continuing medical education (CME) program.

Results: 85% of general physicians stated that effective communication is very important in the treatment process. 90% stated that educating patients leads to more cooperation between physician and patient and to better follow-up of the treatment plan. Only 40% of general physicians had spent adequate time on patient education. The details of the results will be presented at the conference.

 

The Impact of the Eighty Hour Work Week on The House Staff at a Large University Affiliated Community Based Teaching Hospital
Keywords: Resident Working Conditions
Authors: Best, K., Weiss, P., Koller, C., Hess, L.W.
Institution: Lehigh Valley Hospital, Department of Obstetrics and Gynecology

Summary: Objective: To determine how the recently mandated eighty hour work week restriction affects the psycho-social well-being and clinical experience of ob/gyn, surgical, and internal medicine residents at Pennsylvania's largest community-based teaching hospital.

Methods: A questionnaire consisting of ten items, each scored on a five-point Likert scale, was distributed to upper year residents in the departments of ob/gyn, surgery, and internal medicine. The questionnaire addressed residents' perceptions of the psycho-social and clinical impact of the mandated eighty hour work week as well as their program's level of compliance. Resident participation in sentinel cases and/or procedures prior to and after the mandated hours was evaluated to determine the impact on clinical experience.

Results: Final results are pending; however, preliminary data suggest that the ACGME work restrictions have positively affected resident stress/fatigue and home life without compromising either the quality of patient care or patient safety. A small but statistically non-significant impact on surgical and/or procedural experience was noted.

Conclusions: Transitioning to the eighty-hour work week prompted numerous concerns from house staff and faculty. Thus far, our data suggest that there is no negative impact on the quality of patient care. The data also show a commitment to compliance with the mandated work restrictions despite these concerns.

 

Keywords: Teaching Scholarship, Faculty Recognition
Authors: Wolpaw, D., Wolpaw, T.
Institution: Case School of Medicine (Case Western Reserve University School of Medicine)

Summary: Traditional approaches to recognizing contributions to medical education are largely dependent on learners and subject to popularity and exposure bias, impacting only a small percentage of our teachers. With the goal of a process that would be inclusive, broadly applicable, transparent, and academically rigorous, we set out to address the challenge of faculty recognition for education in three steps: 1) Track faculty effort in medical education through an electronic summary, 2) Ask faculty to describe a recent educational effort in a 1-2 page "Best Contribution" narrative, 3) Subject these narratives to an academically rigorous peer review process that serves as the basis for recognition awards. This program is designed to evaluate scholarship and quality in the various products of educational effort, rather than take on the complex and ultimately subjective challenge of fairly evaluating the quality of the teachers themselves. It is expected that the impact of this program will be seen in four ways: 1) Enhancing the profile of education and educators 2) Opening up the classroom for better communication on new and/or successful ideas, 3) Creating a straightforward template for teaching recognition that can be easily translated across institutions, and 4) Establishing a broad-based peer review network for educational ideas and products. Program evaluation includes tracking submissions, peer-review scores, and subsequent publications, as well as surveying attitudes of applicants, non-applicants, and members of the promotions and tenure committee to assess impact and changes in institutional culture.

 

Starting Work - Ready or not? Views of commencing medical interns on the skills developed during their undergraduate program
Keywords: curriculum evaluation, undergraduate medicine, graduate skills
Authors: Lindley, J.; Liddell, M.
Institution: Monash University

Summary: Decisions about the quality of medical education rely, in part, upon the performance of new graduates in their roles as beginning doctors. The success of the course in preparing medical graduates depends upon graduates being equipped with the necessary knowledge, skills, attitudes and professional behaviours. As the practice of medicine requires the application of knowledge and skills in a clinical setting embedded within a social context, graduates must be capable managers of health care across a complex range of situations. To evaluate graduate outcomes, the Faculty of Medicine at Monash has collected data from two consecutive cohorts of graduates during their first year as medical practitioners in the hospital system. The second cohort had undertaken a final-year program that was significantly revised compared with that undertaken by the first cohort. The project gathered graduates' views on the success of their undergraduate course in preparing them for the demands of the medical workplace. Responses were sought on a range of vocational skills comprising clinical tasks, procedural techniques and professional relationships. Data from the surveys were analysed, and the results for clinical tasks, practical skills and professional relationships revealed some differences between the cohorts, with students from the second cohort indicating that they perceived themselves to be slightly better prepared than their counterparts in the previous cohort. Data analysis also allowed identification of specific areas for curriculum review.

 

Influence of the APLS and PALS courses on self-efficacy in paediatric resuscitation
Keywords: APLS, PALS, self-efficacy, paediatric, resuscitation
Authors: Turner, N.M. (Paediatric Anaesthesiologist); Dierselhuis, M.P. (Final-year Medical Student); Draaisma, J.Th.M. (Paediatrician); ten Cate, Th.J. (Professor in Medical Education)
Institution: Wilhelmina Children's Hospital and Faculty of Medicine, University Medical Centre, Utrecht, and St Radboud Medical Centre, Nijmegen, The Netherlands

Summary: Introduction: Most life support courses recognise that performance during resuscitation depends partly on attitudinal factors [1]. The current study was designed to assess the effect of following either the Advanced Paediatric Life Support (APLS) or the Pediatric Advanced Life Support (PALS) course on learners' self-efficacy in respect of six psychomotor skills. Global self-efficacy in paediatric resuscitation was also measured.

Methods: All candidates attending the courses were sent an anonymous questionnaire before the course and three and six months later. They were asked: 1) to rate their self-confidence in respect of the six skills, and globally, using a 100 mm visual analogue scale; 2) to estimate the frequency with which they performed the skills; and 3) to nominate two direct colleagues with a similar level of experience who did not intend to follow either of the courses.

Results: Preliminary results suggest that attending the courses does lead to increased self-efficacy, both globally and in respect of defibrillation, insertion of an intraosseous device and umbilical vein catheterisation. Prior to the course, candidates appear to have less self-confidence about intubation and defibrillation than their colleagues who choose not to follow the course (see graph). Discussion: Although this study makes use of a new method of measuring self-efficacy, and despite the fact that the relationship between self-efficacy and performance is variable [2], we cautiously conclude that the APLS and PALS courses seem to have a positive affective effect on candidates, which might be associated with improved performance of paediatric resuscitation.

References: 1. Carley S, Driscoll P. Trauma education. Resuscitation 2001; 48: 47-56. 2. Morgan PJ, Cleave-Hogg D. Comparison between medical students' experience, confidence and competence. Medical Education 2002; 36: 534-539.

 

Towards the promotion of quality in Medical Education at the Faculty of Medicine of the University of Porto (FMUP): Connecting the Evaluation Process with the Proposal of an Innovative Curriculum of the FMUP
Keywords: Evaluation, Curriculum
Authors: Tavares, M.A.F., Bastos, A., Sousa-Pinto, A.
Institution: Faculty of Medicine, University of Porto, and School of Higher Education, Polytechnic Institute of Viana do Castelo

Summary: From 1998 the medical course of the FMUP was evaluated under several institutional initiatives, all of them within the scope of quality programs in higher education. As part of these programs, CNAVES (the National Council for Higher Education Evaluation) provided the guidelines for a new evaluation of the medical course during the academic year 2002-2003. The response to this request triggered a dynamic process at FMUP involving the whole institution, carried out as a developmental evaluation. The results obtained for resources, administration, education and research allowed a developmental, strategic view of FMUP to be drawn. Evaluation of the curriculum identified a set of strengths and weaknesses that reinforced the urgent need to reform curriculum content and the integration of subjects, merging basic science with a clinical view from the beginning of the medical course, enhancing the clinical component and introducing optional modules. Within the development of a quality program, in the same academic year, the Curriculum Committee of FMUP started to design the new curriculum. The basic structure of the emerging proposal is a core curriculum with optional study modules, providing vertical integration within a system-organization model and horizontal integration within a theme/subject organization. This model is intended to overcome the weaknesses demonstrated in the different evaluation processes of the course while supporting and enhancing the strengths of the institution. The present work describes the process of developmental evaluation established at the FMUP and the central guidelines that will provide the foundation of the new curriculum. (Supported by FMUP.)

 

Students' perceptions of learner-centered, small group seminars on the medical interview
Keywords: learner-centered method, medical interview, undergraduate education, videotape review
Authors: Saiki, T.; Mukohara, K.; Abe, K.; Ban, N.
Institution: Nagoya University Hospital

Summary: Background: Experts in medical education recommend learner-centered instructional methods. We utilized such an experiential, interactive method for a two-day, small group seminar on medical interview and communication skills for students at the Nagoya University School of Medicine. It was part of a 1-week clerkship rotation at the Department of General Medicine.

Purpose: To describe the perceptions of medical students of the learner-centered, interactive, small group seminar for medical interview and communication skills.

Methods: A 10-item questionnaire was administered to a total of 101 students who participated in the seminar throughout the academic year from April 2003 to March 2004. The questionnaire items related to the process of a learner-centered educational method and included a global assessment of satisfaction with the seminar. Each item was rated on a 4-point scale labelled unsatisfied, somewhat unsatisfied, somewhat satisfied, and satisfied. The proportion of students who were satisfied was calculated for each item.

Results: Seventy-six percent of students were satisfied with the seminar overall. Among the other 9 items, engaging all students in discussion was rated the highest (80% satisfied). The items concerning structuring the seminar in a logical sequence and managing time well were rated the lowest (39% and 42% satisfied, respectively).

Conclusion: The learner-centered seminar on medical interviewing was well received by students, especially for its interactive methods. Items that reflect more teacher-centeredness such as structuring the seminar in logical sequence and managing time well received lower satisfaction ratings.

 

Formal education in the early years of postgraduate training: has the pendulum swung too far?
Keywords: formal, informal, experiential, work-based, supervision
Authors: Agius, S J.; Willis, S; Mcardle, P; O'Neill, P
Institution: University of Manchester

Summary: Formal education in the early years of postgraduate training: has the pendulum swung too far?

Background: The relationship between hospital consultants and doctors in training is set to experience yet further transformation with a Government-instigated modernisation process in postgraduate medical education.

Method: The University of Manchester has conducted a qualitative study of the culture of medical education in the SHO grade, based on interviews with 60 clinicians and educational leaders. These were recorded, transcribed and subjected to content analysis. For this study, the data were coded to determine perceptions of formal and informal education.

Results: Within hospital-based communities of practice in medical education, the centrality of the relationship between consultant and doctor in training remains undiminished. The educational experience of a doctor in training depends largely upon the consultant(s) to which (s)he is assigned. There is a common perception that too much emphasis is being placed on formal education, to the detriment of work-based experiential learning.

Discussion: There is a perception that the early years of postgraduate medical training have altered as a result of external variables (reduced hours, shift systems). There is a consequent sense of loss at the reduction in contact between trainer and trainee, compounded by a belief that education is increasingly dislocated from the work-place through the use of formal classroom-based techniques. If the Government's new model of training is to work, then education should be located firmly in the work-place, within a formalised structure that makes learning explicit and foregrounds the importance of supervision and feedback. This will assist in retaining consultant commitment to the educative role, reducing the sense of conflict between service and training, whilst providing an effective means for the doctor in training to harness the requisite knowledge, skills and attitudes as an itinerant learner within a coherent structure.

 

Which factors are associated with the evaluation of a post-graduate course in public health?
Keywords: evaluation, public health, post-graduate course
Authors: Revuelta Muñoz, E.; Farreny Blasi, M; Godoy Garcia, P
Institution: Institut Català de la Salut

Summary: Introduction. Evaluating postgraduate courses is essential for increasing their quality and adapting them to the needs of students. The objective of this study was to analyse whether the student-related characteristics have an influence on their evaluation of post-graduate courses.

Methods: The population of the study was 70 students from the "Diplomado en Sanidad", a post-graduate course in public health held in Lleida (Spain) from 2001 to 2003. The course was organised in 8 modules: "Introduction to Public Health", "Statistics", "Transmitted Diseases", "Protocols in Chronic Diseases", "Health Protection" (HP), "Epidemiology", "EpiInfo", and "Research Methodology" (RM). The first 4 modules were theoretical and the other 4 had a practical approach. Independent study variables were student profession, gender and age. The dependent variable was the global evaluation of each module. The information was obtained from a self-administered questionnaire. The question related to the dependent variable was "Does this course generally meet your needs?", scored from 1 ("total disagreement") to 5 ("total agreement"). Each variable was characterized by its mean and standard error. The relationship between the dependent and independent variables was studied using an ANOVA test with a p value < 0.05.

Results: The students' evaluation of the modules ranged between 3.2 for Statistics and 4.2 for RM, with significant differences (p<0.001). Epidemiology, EpiInfo and HP were also rated significantly highly. We did not detect any significant differences by age or gender.

Conclusions: Modules with a more practical approach receive the best evaluations and the greatest acceptance, independent of student profile. We should therefore adopt a more practical approach in our lectures.

 

Managing change in postgraduate medical education: what the consultant saw
Keywords: organisational change, educators' role
Authors: Agius, S J.; Willis, S.; Mcardle, P.; O'Neill, P A.
Institution: University of Manchester

Summary:

Background: The structure and content of postgraduate medical training in the UK are undergoing a major modernisation process. This will have a significant impact on the role of hospital consultants with educative responsibilities.

Methods: The University of Manchester has conducted a qualitative study of the culture of medical education in the SHO grade. The study includes an exploration of hospital consultants' perceptions of the modernisation process, and its impact on their role. Interviews were conducted with 28 consultants with varying education-related duties. These were recorded, transcribed and subjected to content analysis.

Results: There is widespread uncertainty about the nature of change to postgraduate medical education, particularly amongst front-line clinical educators with no additional education-management role. Even those with such roles (e.g. Medical Directors, Clinical and College Tutors) display considerable levels of anxiety and confusion about the modernisation process. There is a strong sense that educational supervisors should have dedicated time to plan and deliver training. This should be supported with appropriate and sustained training for their educational role.

Discussion: Hospital consultants are concerned about the impact of modernisation in postgraduate medical education on their own role. This is understandable given the many pressures on their time, although much of their uncertainty is a result of limited awareness about change combined with communication deficiencies from Government downwards. Development of the regional and local infrastructure that supports medical education is required. The majority of consultants are committed to the education of doctors in training, but greater recognition and support of their role is necessary if goodwill is to be maintained.

 

Does portfolio contribute to the development of reflective skills?
Keywords: portfolio, assessment, self-evaluation
Authors: Driessen, E.
Institution: Maastricht University

Summary: Questions about the utility of a portfolio as a method for the development and assessment of reflective skills are frequently raised in the literature. However, few published studies report answers to these questions. The purpose of this presentation is to give more insight into the practical use of a reflective portfolio in undergraduate medical education. In our research, we were specifically interested in the conditions that promote the development of reflective skills. We interviewed teachers about their experiences with coaching and assessing students who keep a portfolio, focusing on the teachers' perceptions of the portfolio and of reflection. We used grounded theory methodology to explore teacher perceptions in an open and broad way. All mentors in our study agreed that the process of compiling and discussing a portfolio contributes to the development of reflective ability. The thinking activities that students undertake while compiling their portfolios are essential for this effect. Factors decisive for the successful use of a portfolio are: mentoring, portfolio structure, the nature of student experiences, assessment, and the benefit perceived by the student.

 

Standardized patients in a catalan medical school: a way to learn competencies
Keywords: Standardized patient, undergraduate, competencies
Authors: Descarrega-Queralt, Ramon; Vidal, Francesc; Castro, Antoni; Solà, Rosa; Olivares, Marta; Oliva, Xavier; Ubía, Sandra; Nogués, Susana; Escoda, Rosa; González-Ramírez, Juan
Institution: Facultat de Medicina i Ciències de la Salut - Reus. Universitat Rovira i Virgili de Tarragona

Summary: In 2001 the Faculty of Medicine of Universitat Rovira i Virgili started a project on competencies learning. The participants in the project were students in the final years of the medical degree. Cases with standardized patients were the formative instrument. The competence components analysed were history taking, physical examination and communication skills. An opinion questionnaire was completed by 50 participants. The questionnaire evaluated 18 different areas, using a Likert scale, relating to logistics, organization, contents and learning impact. The results showed that the project is feasible and well accepted, and is a good method to improve the learning process of medical students.

 

A survey of cheating on tests among Catholic University of Chile medical students
Keywords: cheating
Authors: Wright, A.; Trivino, X.; Sirhan, X.; Moreno, R.
Institution: Pontificia Universidad Católica de Chile, Escuela de Medicina

Summary: Cheating is an unethical behavior. In medical schools it represents a recurrent problem, with a reported frequency close to 60 percent. To investigate cheating on tests, an anonymous questionnaire was distributed among 97 fourth-year medical students. Students were asked whether they had seen other students cheat and about their attitudes to cheating on ethical, behavioral, and legal grounds. They were also questioned on the reasons for, consequences of, and deterrents to cheating. Of the students, 86% reported that they had seen other students cheating. Ninety percent considered cheating unethical, 77% reprehensible, and 43% unlawful. The main reasons for cheating were to obtain better grades (21%), insecurity about the correct answer (16%), and lack of study (13%). Eighty-six percent reported negative consequences related to cheating, 91% considered it detrimental to the student who cheats, and 63% felt cheating to be harmful to peers. Slightly more than half of the students stated that cheating is not related to inappropriate behavior in patient care. The expected increase in grades was mentioned as a positive consequence (60%), especially when applying for residency. The main deterrents proposed were improved test quality (35%), more effective monitoring (28%), and application of institutional regulations. Interestingly, a high percentage of students agreed in their responses and attitudes to cheating. It is remarkable that students perceive test cheating as unethical and as having negative consequences. This constitutes a basis for developing a culture in Medicine that nurtures honesty, integrity, and professionalism.

 

Practising Doctors Can Accept Review
Keywords: peer review, acceptance
Authors: Kaigas, T.
Institution: Cambridge Hospital

Summary: Acceptance of peer review by doctors in a Canadian community hospital was assessed using a post-review survey. In this program, practising doctors were systematically reviewed in the hospital using a multimodal review process. They then filled out a survey regarding their impression and degree of satisfaction with the review. High acceptance was demonstrated with 92% seeing the review as positive overall. Possible reasons for this are discussed and proposals presented to gain acceptance, even with sceptical groups of doctors.

 

The feasibility, reliability, and construct validity of a program director's (supervisor's) evaluation form for medical school graduates
Keywords: outcomes assessment
Authors: Steven J. Durning, Louis N Pangaro, Linda Lawrence, John McManigle and Donna Waechter
Institution: Uniformed Services University, Bethesda, Maryland 20814, USA

Summary: Purpose: We determined the feasibility, reliability and construct validity of a supervisor's survey for graduates of our institution.

Methods: We prospectively sought feedback from Program Directors for our graduates during their first post-graduate year. Surveys were sent out once yearly, with up to 2 additional mailings. For this study, we reviewed all completed Program Director Evaluation Form surveys from 1993-2002. Interns are rated on a 1-5 scale on each of 18 items. Mean scores per item were calculated. Feasibility was estimated by the survey response rate. Internal consistency was determined by calculating Cronbach's alpha and by exploratory factor analysis with varimax rotation. Assuming that our graduates would show a spectrum of proficiency when compared to graduates from other schools, construct validity was determined by analyzing the range of scores, including the percentage of scores below the acceptable level (2 or 1; see the table below).
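A minimal Python sketch of a Cronbach's alpha calculation of the kind reported here, using the standard formula; the 100 x 18 matrix of ratings below is randomly generated for illustration only.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """scores: respondents x items matrix of ratings."""
        k = scores.shape[1]
        sum_item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - sum_item_vars / total_var)

    rng = np.random.default_rng(0)
    ratings = rng.integers(3, 6, size=(100, 18))  # 100 hypothetical surveys, 18 items rated 3-5
    print(f"alpha = {cronbach_alpha(ratings):.2f}")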

Results: 1297 surveys (81% of graduates) were returned. Cronbach's alpha was 0.93. Mean scores across items were 3.81-4.20, with a median score of 4.0 for all questions (standard deviations ranged from 0.76 to 0.84).

Performance (rating)      % Graduates
Outstanding (5)           31.5%
Superior (4)              36.4%
Average (3)               25.2%
Needs Improvement (2)      3.5%
Not Satisfactory (1)       0.1%

Factor analysis found that the survey collapsed into 2 domains (69% of the variance): professionalism and knowledge.

Conclusions: Our survey was feasible and had high internal consistency. Factor analysis revealed two complementary domains (knowledge and professionalism), supporting the content validity. Analysis of the range of scores supports the form's construct validity.

 

The survey of general physicians` views about quality of compiled and continuing education programs
Keywords: Quality- Continuing Education- GP
Authors: Marashi, T.; Shakoorniya, A.; Heidari Soorshjani, S.
Institution: Faculty of Health, Ahvaz Medical Sciences University

Summary: Continuing education has come to be accepted as a necessity worldwide; identifying instructional needs and prioritising continuing education programmes makes it possible to attain the desired quality. The present study was conducted to determine the views of general practitioners (GPs) who had participated in the compiled continuing education programmes on the quality of those programmes: their content, their fit with occupational needs, and their capacity to stimulate interest in specialty study. This is a descriptive-analytical study; the sample comprised 451 GPs who had participated in the continuing education programmes in 2002, and data were gathered through a questionnaire. With respect to the four specific research targets, 51% of participants rated the programmes as very successful in presenting new scientific subjects, 63% considered the content proportionate to occupational needs, and 61% felt the programmes succeeded in stimulating interest in personal study. For the fourth target, participants rated the motivations for attending as follows: reviewing information (2.70), seeking solutions to professional problems (2.63), exchanging information and experience (2.84), and gaining credit points (3.19). These programmes have been broadly successful, but further improvement in quality is recommended by presenting new and relevant scientific subjects, using varied methods of delivering the instructional programmes, and better matching the content to GPs' occupational needs.

 

The effectiveness comparison of two educational methods on academic advisors' knowledge, attitude, and practice
Keywords: Academic Advisor, Knowledge, Attitude, Practice, Medical Students, Educational Workshop
Authors: Hazavehei, S. Department of Health Promotion and Education, School of Health, Isfahan University of Medical Sciences, Isfahan, Iran Hazavehei@hlth.mui.ac.ir
Institution: Isfahan University of Medical Sciences

Summary: The purpose of this study was to investigate the effect of two educational methods (a workshop versus provision of educational material) on the level of knowledge, attitude, and practice of academic advisors (AA) at Hamadan University of Medical Sciences. The AA completed a pre-test before the intervention and then participated in the experimental programme, in which they were randomly divided into two groups. Group 1 (N=43) participated in a one-day workshop (educational method one) and Group 2 (N=44) received only educational material (educational method two). Data on knowledge, attitude, and practice were collected with valid and reliable questionnaires before the educational programme and again one academic semester after it. The results indicated significant differences (p<0.001) in the level of knowledge about important educational policies and regulations related to academic guidance and counselling of students between the pre-test group (M=10.77, SD=4.2), Group 1 (M=14.77), and Group 2 (M=11.54, SD=2.76); this difference existed only between Group 1 and the other two groups. There was also a significant difference (p<0.05) between the level of attitude in Group 1 (M=61.79, SD=5.78) and the pre-test group (M=57.20, SD=11.6). This study showed that an educational workshop programme based on role playing, group discussion, and group work and interaction can affect behaviour and attitude, resulting in improved skills and abilities. The findings of this research may be beneficial for developing educational programmes for the AA of universities.

 

Assessment of the intra-service rotations in anaesthesiology and reanimation: change in methodology
Keywords: Assessment in anaesthesiology, improving trainees' rotation, trainees' evaluation.
Authors: Rincon, R.
Institution: Hospital Germans Trias i Pujol

Summary:
Authors: Rincón R, Hinojosa M, Llasera R, Escudero A, Moret E, García Guasch R. Hospital Germans Trias i Pujol. Badalona. Barcelona (Spain)

Introduction: In order to improve the supervision of the trainee rotations, the anaesthetist in charge of each area will complete an evaluation form. The change in methodology will improve the personal performance of the trainee.

Objectives: Improve the final result, reaching the stated objectives more successfully, through the identification of the strengths and weaknesses that need to be improved.

Material and methods: Once the consultant has defined the objectives of their area, the evaluation form is completed at the halfway point and at the end of the period, by the consultant and the resident independently. The two evaluation forms are compared and contrasted, establishing the points to be improved and charting the progress of the learner. The evaluation includes seven aptitude and five attitude criteria. Both are assessed in a qualitative way with a descriptive, non-numerical scale.

Results:

-All the trainees and the consultants agree to being evaluated and to evaluating respectively.
-In 75% of cases the trainee was unaware of the detected errors and, once informed, corrected 50% of them. Errors in clinical theory are easier to correct than practical errors, since the latter depend on the trainee's learning curve for the specific technique.
-This system improves the quality of observation, the setting of objectives and evaluation.
-Academic activity was re-activated in most of the areas.

Conclusion:

-The evaluation form is useful in the detection of problems.
-It improves the quality of training if both evaluations are done during each period.
-The interest of the consultants in residents' training has been re-awakened.
-The extra work needed in this evaluation process requires an allocation of six hours a week for the tutor.

 

Rheumatology Review Course on Personal Learning Projects as a Method of Continuing Professional Development
Keywords: Personal Learning Projects; Continuing Professional Development
Authors: Bell, M., Sibbald, G.
Institution: Sunnybrook and Women's College Health Sciences Centre

Summary: Abstract

Purpose: To determine whether Rheumatologists adopt and adhere to the use of personal learning projects (PLPs) as a method of continuing professional development (CPD) and maintenance of certification following the introduction to the concept of PLPs and their utilization within a review workshop.

Methods: Rheumatologists attending a 2 day continuing education workshop were involved in a 30 minute interactive lecture outlining the concept of learning portfolios and how to use a PLP as a method of continuing education. Attending Rheumatologists filled out a pre and post-workshop evaluation questionnaire followed by the completion of a 3 month follow-up questionnaire.

Results: 25 Rheumatologists, who had been in practice for a mean of 16 years, completed the pre-, post- and 3-month follow-up questionnaires, with similar numbers of males and females. Average awareness of CPD methods was 7.8 post-workshop, with a slight increase at the 3-month follow-up. In 2002 the average number of PLPs was reported at 5.8 with a median of zero (range 0-120), while post-workshop and 3-month results showed a personal increase in PLPs in 2003. Time constraints remained the number one barrier to personal involvement with CPD, while paper diaries remained the favoured method of recording PLPs.

Conclusion: There was an increase in Rheumatologists' awareness and application of PLPs, which was sustained at the 3-month period. The benefits and ease of PLPs as a method of CPD require reinforcement to improve adoption and adherence.

 

Patient Satisfaction In An Ambulatory Rheumatology Clinic
Keywords: Patient Satisfaction; Rheumatology Clinic
Authors: Bell, M., Bedard, P.
Institution: Sunnybrook and Women's College Health Sciences Centre

Summary: Purpose: To determine patient satisfaction with care in the Division of Rheumatology at Sunnybrook & Women's College HSC across six domains: provision of information, empathy with the patient, attitude towards the patient, access to and continuity with the caregiver, technical quality and competence, and general satisfaction.

Methods: Patients who had a diagnosis of chronic arthritis and had been seen in clinic on at least three prior occasions were asked to complete the Leeds Patient Satisfaction Questionnaire (LPSQ) once they had registered for their appointment. The LPSQ is a 45-item Likert-scale (1-5: <3 dissatisfied, >3 satisfied) survey measuring satisfaction with care across the six domains described above. The attending rheumatologist and other clinic medical staff were not made aware of which patients had completed the questionnaire. All questionnaires were scored according to the guidelines of the Leeds Satisfaction Questionnaire and were checked by two independent investigators to minimize arithmetical errors. Descriptive statistics were calculated.

Results: Eighty-seven patients completed the questionnaire. A mean normalized Overall Satisfaction score was calculated by combining satisfaction ratings across all subgroups, together with mean subgroup scores for giving of information, empathy with the patient, technical quality and competence, attitude towards the patient, access to the service and continuity of care, and general satisfaction.

Conclusions: Patients appear to be very satisfied with the care they receive. Areas that could be improved in the future include patient education regarding clinic services, waiting times, and receiving urgent consultation if needed.

 

Determination of the Effect of a Teaching Skills Workshop on Interns' Evaluations of their Residents-as-Teachers
Keywords: Teaching skills – Clinical teaching – Educational spiral - Needs Assessment
Authors: Ajami, A.(M.D.), Soltani Arabshahi, S.K. (M.D.), Siabani, S. (M.D.)
Institution: Iran Medical University, Deputy of Education, Medical Educational & Developmental Center

Summary: Introduction: Residents play an important role in teaching medical students, with whom they share a large number of contact hours. Developing teaching skills, becoming familiar with innovative teaching styles, knowing how to increase educational efficacy, and providing an educational spiral are therefore necessities of residency programs.

Objective: To determine the effect of teaching skills workshop on the teaching role of residents.

Materials & methods: This is a quasi-experimental study. A self-administered questionnaire was distributed among interns of the pediatrics and internal medicine wards in 2 universities. Randomly selected residents then participated in an 8-hour workshop. Two to three months after the workshop, the interns completed the questionnaire again.

Results: There was a significant difference between the mean group ratings for all of the teaching skills characteristics in both universities. The overall teaching skills score in Iran University increased, as did 5 categories of teaching skills in Kermanshah University (all except "Giving feedback" and "Professional characteristics"). Overall teaching effectiveness of residents increased after the workshop.

Conclusion: The increase in skill scores after the workshop indicates that training programs and teaching skills courses for residents should be delivered as formal instructional components of residency programs. A needs assessment should be done to develop such a course.

 

The dual roles of the global rating scale on a 30 station Objective Structured Clinical Examination for chiropractic radiologists: reward and punishment, plus standard setting
Keywords: OSCE, borderline method, global rating scale
Authors: Lawson, D.; DeVries, R.
Institution: University of Calgary (Lawson); Northwestern Health Sciences University (DeVries)

Summary: A global rating scale (0=outright fail, 1=borderline fail, 2=borderline pass, 3=outright pass) was added to a detailed checklist for each case of a 30-station chiropractic radiology OSCE. The borderline candidate method was used to set the minimum performance level (MPL) and was compared to the previously used modified Ebel method for ease of use and examiner confidence in the MPL. Reliability (alpha) for station totals, global scales, and the combination was high (.88, .90, .94). The Pearson correlation between the sum of global scores and total checklist scores was .94. The MPL was 70% of the marks available, and 80% of candidates were successful. Feedback from examiners revealed that they unanimously supported the continued use of the global rating scale. The main reasons cited were 1) that they felt the detailed checklist advantaged weaker candidates, and the global rating scale allowed examiners to reward strong candidates who may not have obtained all the checklist points and to penalise weaker candidates who obtained most of the checklist points but were very disorganized in their approach, and 2) that they felt more confident in the MPL set by the global rating scale than in that set by the Ebel method.
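A minimal Python sketch of one way the borderline-candidate method can set a station MPL, taking the mean checklist score of candidates given a borderline global rating; the scores and ratings below are hypothetical.

    import numpy as np

    checklist_scores = np.array([22, 18, 25, 15, 20, 28, 17, 24])  # hypothetical station scores out of 30
    global_ratings = np.array([3, 2, 3, 1, 2, 3, 1, 3])            # 0-3 global scale as described above

    borderline = np.isin(global_ratings, (1, 2))                   # borderline fail / borderline pass
    station_mpl = checklist_scores[borderline].mean()
    print(f"Station MPL = {station_mpl:.1f} of 30 marks")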

 

Evaluating sports residency admission procedures for the College of Chiropractic Sports Science Residency Programme
Keywords: admission, interview, factor analysis
Authors: Lawson, D.; Uchacz, G.
Institution: University of Calgary (Lawson), Private Practice (Uchacz)

Summary: A pilot project introduced the use of a videotaped structured interview to reduce costs in the admission process. Candidates were videotaped while being interviewed by a panel. The interviews were combined with letters of reference, letters of intent, a questionnaire, and college transcripts, and were scored by the panel and 4 Fellows spread across Canada. Those scoring the videotapes responded to 2 questionnaires: how helpful each of the 5 processes was in evaluating the candidates, and their satisfaction with the use of videotaped interviews. Internal reliability of the admission instrument was estimated (Cronbach's alpha 0.90). A multiple regression of the 5 processes on the final rating was performed (88% of the variance was explained by the questionnaire alone). Item Response Theory (IRT) was applied to the data for measures of inter-rater reliability (exact agreement 36% of the time when 32% agreement was expected under the model). An exploratory factor analysis was performed to determine what traits were being measured (collapsing the 35 rating scales to 7 factors). The raters found the reference letters, the letter of intent and the interview most helpful. The use of videotaped interviews was rated as an acceptable alternative to a "live" interview by 75% of the raters.

The contribution of standardized patients to error variance in candidate scores on a high stakes objective structured clinical examination
Keywords: standardized patient, OSCE, SP error variance
Authors: Lawson, D.; Harasaym, P.
Institution: University of Calgary

Summary: The purpose of this research project was to determine whether differences in standardized patient (SP) performance contribute to error variance in candidate scores. A 10-station OSCE was administered to 124 candidates through the Canadian Chiropractic Examining Board. There were 50 examiners and SPs involved over five parallel tracks. To separate SP performance from the examiner stringency/leniency effect, SPs changed tracks at mid-day. The data were analyzed with the many-facet Rasch model (MFRM) of Item Response Theory. The standard deviations of the logit measures for candidates, examiners, and SPs were .35, .27, and .28 respectively. The MFRM demonstrated that SP variance is similar in size to examiner variance and yielded evidence of an SP stringency/leniency effect: some SPs yield information to candidates more readily than others. Our conclusion is that SPs do contribute to error variance in candidate scores, at approximately the same magnitude as the examiner stringency/leniency effect, and that SP error variance can only be corrected for if SPs remain in the same station and perform enough times in the day to obtain a stable measure. The MFRM identifies SPs at the extremes of the stringency/leniency continuum and allows for remedial training.

Testing a Theoretical Model of Multi Source Feedback Physician Performance
Keywords: Physician performance, structural equation modeling, multi source feedback
Authors: Violato, C., Lockyer, J., Fidler, H. & Toews, J.
Institution: University of Calgary

Summary: Objective: To empirically test a theoretical model of physician performance.

Methods: Performance data for 308 physicians derived from four sources (self, patient, peer and co-worker ratings) were tested within a structural equation modeling framework. The physicians, selected using proportionate random sampling and stratified by urban and rural communities, had all been registered with the licensing body for more than five years and were generalist physicians from the disciplines of family medicine, obstetrics and gynecology, internal medicine, and pediatrics.

Results: A four-factor structural model of physician performance was based on the four data sources and fitted to the data using structural equation modeling methods. The comparative fit index (CFI = .96) was high, indicating that the model fit the data well (residual mean square = 0.07). The intercorrelations between peer, co-worker and patient data indicated that while each assesses the physician from a unique perspective, they also intercorrelate with each other (r = .20 to .31).

Conclusions: The model provides a multi-source and multi-dimensional approach to assessing physician performance; while peers, patients, and co-workers assess the physician from their own unique perspectives, they also concur on several dimensions.

 

A Pilot Program to Assess International Medical Graduates holding Limited Licenses in Canada
Keywords: international medical graduate, multi source feedback, 360 degree evaluation, physician assessment
Authors: Lockyer, J.; Blackmore, D.; Crutcher, R.; Ward, B.; Salte, B.; Shaw, K.; Wolfish, N.; Fidler, H.
Institution: University of Calgary (Lockyer, Crutcher, Fidler) Medical Council of Canada (Blackmore, Wolfish) College of Physicians and Surgeons of Alberta (Ward) College of Physicians and Surgeons of Saskatchewan (Salte, Shaw)

Summary: International medical graduates (IMGs) may provide services in Canada under a 'defined' license prior to the successful completion of Medical Council of Canada examinations. This study tested the feasibility and psychometrics of a multi-source (360-degree) evaluation. 15 physicians were recruited for assessment by 25 patients (13 items), 8 medical colleagues (22 items), 8 non-MD co-workers (12 items), and self (21 items). Instruments used 5-point assessment scales (5 = high) to examine professionalism, communication, medical skill, team work and patient safety. Two (2/15) physicians were unable to participate. Response rates were high, with 88% of patient, 91% of medical colleague and 98% of co-worker surveys, and 13 self-assessments, returned. Mean ratings on all surveys were between 1 and 5. The mean self rating was marginally higher than the mean medical colleague rating (4.62, sd = .21 vs. 4.49, sd = .67). Most items performed well; a few items exceeded a 20% 'unable to assess' rate. Cronbach's alpha was .81 for the self instrument and > .94 for the other instruments. Multisource evaluation is feasible as a monitoring tool. A follow-up study (n=20) is underway to test the feasibility of data collection by internet and interactive voice response.

 

Parent evaluations of paediatric interview skills
Keywords: interview skill parent evaluation
Authors: Maree OKeefe, Justine Whitham
Institution: University of Adelaide

Summary: Concerns regarding the reliability of patient evaluations have limited their use in medical student learning. A program was developed to obtain parent evaluations of student paediatric interview skills for feedback and to identify students at risk of poor performance in summative assessments. 130 parent evaluations were obtained for 67 students (parent participation 72%, student 58%). Parents completed a 13-item questionnaire (maximum score 91, higher scores = higher student skill level). Students received their individual parent scores and de-identified class mean scores as feedback, and participants were surveyed regarding the program. Parent evaluation scores were compared with student performance in faculty assessments of clinical interview skills. Parents supported the program and participating students valued parent feedback. Students who received a parent score that was less than one standard deviation below the class mean ('Lowscore' students) obtained lower faculty assessment scores than did other students (M±SD, 59%±5 vs 64%±7, p<0.05). Obtaining one 'Lowscore' was associated with an increased risk of obtaining a faculty assessment score below the class mean (OR 4.5, CI: 1.3-15.7; sensitivity 0.38, specificity 0.88). Parent evaluations provided useful feedback to students and identified one group of students at increased risk of weaker performance in summative assessments.
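A minimal Python sketch of how an odds ratio, sensitivity and specificity of the kind quoted above can be derived from a 2x2 table; the counts below are invented and do not reproduce the study data.

    # Hypothetical 2x2 table: 'Lowscore' parent rating vs faculty score below the class mean
    a, b = 8, 5     # Lowscore:     below mean / at or above mean
    c, d = 13, 41   # not Lowscore: below mean / at or above mean

    odds_ratio = (a * d) / (b * c)
    sensitivity = a / (a + c)   # proportion of below-mean students flagged by a Lowscore
    specificity = d / (b + d)   # proportion of at-or-above-mean students not flagged
    print(f"OR = {odds_ratio:.1f}, sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")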

 

Developing and Implementing an Educational Assessment program: the approach in one Mexican Medical School
Keywords: Assessment program, Student educational outcomes
Authors: Professor Todd W. Ellwein; Julio Cesar Gomez Fernández M.D.; Pilar Talayero y Tenorio M.D.
Institution: "Dn Santiago Ramon y Cajal" School of Medicine, Universidad Westhill

Summary: An educational assessment program was recently developed and implemented at the Dn. Santiago Ramon y Cajal School of Medicine at Universidad Westhill in Mexico City, Mexico. The objective of the program is to improve student educational outcomes. The educational assessment program has three major steps: 1) Laying the groundwork. Activities include establishing/communicating the reason for initiating the assessment plan; creating the assessment team and establishing leadership and oversight responsibilities; building "faculty ownership" of the plan; and determining the status of current assessment activities. 2) Implementing the program. Department faculty establish educational objectives based on institutional mission; determine and implement assessment procedures; and measure educational outcomes. 3) Using the results for improvement. Results of assessment procedures are used to improve educational outcomes. This presentation will discuss the challenges presented along each of the above steps. In addition, methods of encouraging "faculty ownership" of the program and ways to encourage students to take assessment instruments seriously will be discussed.

 

Teacher assessment from students' viewpoint in the Educational Development Center (EDC) of Tehran University of Medical Sciences and Health Services, 1999-2003
Keywords: assessment, teacher, student's viewpoint
Authors: Dr.Fereshteh Farzianpour, Dr.Mohammad Ali Sadighi Gilani, Dr.Ali Akbar Zeinaloo
Institution: School of Public Health, Tehran University of Medical Sciences, and Educational Development Center

Summary: Educational assessment is the process of producing and providing evaluative-descriptive information on the value and significance of educational objectives, operations and results, in order to guide decisions, responses and information. Decision-making, as intended in this definition, means selecting the best of the possible choices by using information on the efficiency and accuracy of each choice. Responding means the ability to offer a persuasive report on educational activities, their reasons, expenditure and effects. Supervision of an educational program or activity answers the following questions:

1- To what extent has an educational program attracted certain learners?

2- Is the teaching-learning process, as well as giving its related services, being performed according to desirable programs?

3- Which resources have been used to perform the program? The general objective of this research is to evaluate the educational activities (theoretical and practical) of professors from the learners' viewpoint during an educational semester, and to convey learners' impressions in a confidential manner in order to improve and promote the quality of education.

 

Reliability of MPLs set by examiners on an OSCE on two separate occasions
Keywords: minimum performance level, MPL, OSCE, must know / may know
Authors: Lawson, Douglas M, Harasym, Peter H.
Institution: University of Calgary

Summary: Standard setting methods for OSCEs are controversial. The purpose of this research project was to determine whether examiners consistently set MPLs for OSCEs using the modified Nedelsky method of assigning must know/may know to items for the minimally competent candidate. Six months apart, examiners set the MPLs for a 10-station OSCE administered by the Canadian Chiropractic Examining Board. The stations and cases were identical, and all candidates were naive to the stations (no repeating candidates). Examiners received training on the setting of MPLs prior to each examination and set the MPL for the station to which they were assigned. Between cycles of each examination (noon and end of day), examiners were asked to identify the must know items on the 25-item rating form used for their station. The MPLs for the examinations were the summation of the must know items. This study found, on average, a 15-mark difference between the two standard setting procedures (3.36%). Approximately 22% of the candidates could be adversely affected, depending on which MPLs were used. This investigation provided evidence that examiners' MPL-setting decisions on an OSCE are not stable from administration to administration and that alternative, more stable methods should be used.
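A minimal Python sketch of the kind of must-know summation described: each station's MPL is the number of items flagged as must know on its 25-item form, and the examination MPL is the sum over the 10 stations. The counts below are hypothetical.

    # Hypothetical number of "must know" items flagged per station (out of 25 each)
    must_know_per_station = [14, 16, 13, 15, 17, 14, 12, 16, 15, 13]

    exam_mpl = sum(must_know_per_station)          # marks a minimally competent candidate must earn
    total_marks = 25 * len(must_know_per_station)
    print(f"Examination MPL = {exam_mpl} of {total_marks} marks ({exam_mpl / total_marks:.1%})")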

 

Standard Setting for Clinical Competence at Graduation from Medical School: is it possible to achieve consensus?
Keywords: Standard setting, Angoff, OSCEs, graduating examinations
Authors: Boursicot, K.A.M.1, Pell, G. 2 , Roberts, T.E.2
Institution: 1 Cambridge University, 2 Leeds University

Summary: While standardised tests of clinical skills, such as OSCEs (Objective Structured Clinical Examinations), have become widely used to assess clinical competence, the method of setting the pass mark varies greatly and there is no agreed 'best' standard setting process. There is a need for more quantitative evidence in this field. In our study, we compared the pass marks set for six OSCE stations using the Angoff method, for a graduating level examination, across five medical schools in the UK. The pass marks set for each of the six OSCE stations at the five medical schools differed significantly. The overall pass mark, derived from the six stations, varied between 47% and 60% across the different medical schools. In-depth analysis of the judges' scores on individual stations at each medical school and comparison of results across the schools will be presented and discussed. These results have serious implications for the outcomes of graduating examinations, in that students with the same level of competency would pass at one medical school, but would fail at another, even when the test is identical.
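A minimal Python sketch of how an Angoff-style overall pass mark can be derived, averaging judges' estimates of a borderline graduate's expected station score and then averaging across stations; the judge estimates below are hypothetical.

    import numpy as np

    # Rows = judges, columns = six stations; each value is the estimated proportion of the
    # station's marks a borderline graduate would obtain (hypothetical figures)
    angoff_estimates = np.array([
        [0.55, 0.60, 0.45, 0.50, 0.65, 0.40],
        [0.50, 0.70, 0.50, 0.55, 0.60, 0.45],
        [0.60, 0.65, 0.40, 0.50, 0.55, 0.50],
    ])

    station_pass_marks = angoff_estimates.mean(axis=0)     # per-station cut scores
    overall_pass_mark = station_pass_marks.mean() * 100    # overall OSCE pass mark, %
    print(station_pass_marks, f"overall = {overall_pass_mark:.0f}%")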

 

The effects of introducing two criteria for setting passing standards in a 3rd Year summative OSCE
Keywords: Standard setting, OSCEs, summative mid-course examinations
Authors: Boursicot, K.A.M.1, Evans, D.E.2
Institution: 1 Cambridge University, 2 Queen Mary University of London

Summary: At Barts and the London School of Medicine, students at the end of the 3rd Year are required to pass a 20-station OSCE covering basic clinical and communication skills before they are able to proceed to the 4th Year of the course. The borderline group method for setting the passing score was introduced in 2002. The final pass mark for the overall OSCE was calculated as the mean of the pass marks set for each individual station. The OSCE was fully compensatory, as each student's overall score was the summated mean of their individual station scores. This meant that poor performances on some stations could be compensated by high-scoring performances on other stations. In a few cases, students achieved an overall score above the total OSCE pass mark but in fact failed more than 50% of the individual stations. A second passing criterion was therefore introduced: in addition to passing on overall score, students also had to pass a minimum number of individual stations. Analysis of the effects of introducing these 2 criteria on the numbers of students passing this 3rd Year OSCE will be presented and discussed.
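A minimal Python sketch of a conjunctive pass rule of the kind described, requiring both an adequate overall score and a minimum number of stations passed; all scores, cut-offs and the minimum-station threshold below are invented.

    def passes_osce(station_scores, station_pass_marks, overall_pass_mark, min_stations_passed):
        """Return True only if both criteria are met: overall mean and stations passed."""
        overall = sum(station_scores) / len(station_scores)
        stations_passed = sum(s >= p for s, p in zip(station_scores, station_pass_marks))
        return overall >= overall_pass_mark and stations_passed >= min_stations_passed

    scores = [72, 40, 65, 38, 80, 55, 70, 42, 60, 75, 68, 50, 35, 66, 58, 71, 45, 62, 77, 53]
    cut_scores = [60] * 20  # illustrative per-station pass marks for a 20-station OSCE
    print(passes_osce(scores, cut_scores, overall_pass_mark=58, min_stations_passed=10))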

 

Evaluating the CANMEDS Roles in an Internal Medicine Residency Program
Keywords: In Training Evaluation, CANMEDS
Authors: Rothman, A., Imrie, K.
Institution: University of Toronto

Summary: Does a recently introduced CanMEDS-based monthly in-training evaluation form produce valid results? At the end of each training month, in-training evaluation forms are completed for all Core Medicine residents. In 2003-2004 a CanMEDS-based ITER form was introduced and the results of all completed forms were entered into a database. The form contains 36 items grouped by CanMEDS role. Each item, except those associated with the Professional role, requires a rating of exceeds, meets, or does not meet expectations; for the Professional role only the latter two options are used. We conducted an analysis of the results of all monthly in-training evaluation forms returned in the 2002-2003 academic year. Aggregate scores for each role were calculated. With the exception of the Professionalism score, the internal consistencies of the role scores were all greater than 0.90. Analyses of these data demonstrated growth in role scores across the 3 PGY years, consistency in role scores within PGY years, and discrimination among residents by role score. There were observed differences between the ratings of roles and in the ratings of competencies within roles. These results provide evidence of the validity of the scores from the Core Internal Medicine CanMEDS-based ITER form.

 

The survey of nurses' viewpoint on Continuous Nursing Education, Kermanshah, Iran, 2003
Keywords: Continuous Nursing Education, Resources,
Authors: Jalali, R.
Institution: Faculty of Nursing

Summary: Continuous Nursing Education is considered a means of responding appropriately to the rapid changes in health care delivery and of promoting professional standards of current practice among nurses. This study was conducted to investigate nurses' opinions about continuous nursing education and to determine the educational resources as well as nurses' educational needs. This descriptive cross-sectional study was conducted on the nursing units in Kermanshah, Iran, in 2003. The members of the target population were one hundred nurses who provided direct patient care in their units. The responses were measured by a single-item Likert scale. A total of one hundred questionnaires were studied. The nurses had worked as nurses for more than ten years on average (10.97); 36% of them were single and 64% were married. They had spent more than 48 hours in continuous nursing education in the past two years, and 5.9 hours monthly. 65% of them used textbooks, and 53% participated in conferences to meet immediate learning needs. On the whole, the motivation for their participation, the obstacles to continuous nursing education and the need for this educational programme were important from their viewpoint. Offering the best value for continuous nursing education is an important subject. In order to increase nurses' motivation and to minimize the obstacles, we should improve this educational programme by increasing personnel numbers in the hospital, decreasing the workload, supporting participation in continuous nursing education programmes and allocating dedicated time for these programmes.

 

Faculty Performance Evaluation
Keywords: Faculty Evaluation
Authors: Hoy, Mary P., Ph.D.
Institution: The University of Health Sciences College of Osteopathic Medicine, Kansas City, Missouri USA

Summary: Performance evaluation of medical school faculty is difficult to conduct in an unbiased, consistent, professional manner and is subject to criticisms of subjectivity and invalidity. The University of Health Sciences has developed a model system which objectively identifies key responsibilities for faculty in the following domains: teaching, research, clinical practice, administration and service. Each domain lists a goal (the major end results of the job), objectives (key measurable achievements) and performance levels. The faculty member and supervisor determine the applicable major end results of the job. Criteria for evaluation with clearly delineated performance expectations are presented. At the end of the evaluation period the faculty member is held accountable for performance, and ratings are conducted based upon the evidence presented. This presentation will discuss the theory and principles leading to the development of the form and the criteria selected. Audience participation will be encouraged through small group activities.

Table 1. Performance plan

MAJOR END RESULTS OF JOB
*Identify the key end results for the position and list below in order of priority.

KEY MEASURABLE OBJECTIVES
*State specific objectives for achieving end results in terms of expected performance for this position.

PERFORMANCE LEVELS
*Provide a measurable statement expressing performance level.

Example row (TEACHING):
- Major end result: 1. Prepares for assigned lectures (as applicable).
- Key measurable objective: 1.1 Two to three instructional objectives (using Bloom's taxonomy) are submitted for each lecture for inclusion in the syllabus (% of time allocation).
- Performance levels: 1.1.1 No learning objectives are written or submitted ... (1.1.2-4) ... 1.1.5 #4 and learning objectives are used by students.

 

Assessment of common errors in clinical evaluation in view points of students
Keywords: student, performance, evaluation
Authors: Khadivzadeh, T.
Institution: School of Nursing and Midwifery

Summary: The aim of the present study was to assess the common errors of clinical evaluation from the viewpoint of students of the Nursing & Midwifery School, Mashad, 2003. In this descriptive study, 120 nursing, midwifery and operating-room students were randomly selected. Data were gathered using a questionnaire two weeks after the end of the clinical courses and after students had received their course scores. Validity was confirmed by content validity, and reliability through test-retest and internal consistency. Descriptive and analytic statistics were used in data processing. Common errors in student performance evaluation reported by the students included halo error (44%), central tendency error (33%), positive and negative leniency error (38%), similarity error (12%), focusing on a single criterion (9%), and focusing on non-operational (non-practical) criteria (48%). In the view of 21% of students the evaluation was not relevant, and for 36% it was only somewhat relevant, to the aim and content of the course. 11% reported practical exams at the end of the course. 45% believed their performance in the course had no effect on their course marks, 62% believed the marks given were not their true marks, and 77% asked teachers to revise and change student evaluation. Students' viewpoints were related to their grade point average. Revising student evaluation methods based on course objectives and instructing the instructors in student performance evaluation methods are suggested.

 

Dimensionality of a Medical Licensing Examination Series
Keywords: Medical licensure, dimensionality
Authors: Shen, L.
Institution: National Board of Osteopathic Medical Examiners

Summary: Medical licensure examinations in the U.S. have traditionally been a three-exam series. Nevertheless, the common purpose and the integration of the three exams have hardly been operationalized. To operationalize the common measurement objective, the Comprehensive Osteopathic Medical Licensing Examination (COMLEX) uniquely requires all three of its Level exams to share a common content outline while allowing the different Levels to emphasize different aspects of practice. This common-outline design assumes unidimensionality of the whole exam series. The purpose of this study was to examine the dimensionality of the whole COMLEX examination series. The factor structure of the three COMLEX exams was studied by treating the test plan categories as hypothetically distinct tests. Disattenuated correlations among the categories were examined and a principal component analysis of the disattenuated correlations was performed. An item-level factor analysis, allowing for the detection of factors unrelated to the test plan categories, was also performed. The dimensionality of a subset of 100 items common to all three levels of the examination was studied in relation to the dimensionality of the non-linking items. The results of this study support the concept that the knowledge component of medical competence, regardless of its breadth, may be considered a single construct when it is operationalized carefully.
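A minimal Python sketch of the correction for attenuation used in analyses like this one, dividing an observed correlation between two categories by the geometric mean of their reliabilities; the input values are illustrative only.

    def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
        """Estimate the true-score correlation between two measures from their
        observed correlation and their reliability coefficients."""
        return r_xy / (rel_x * rel_y) ** 0.5

    # Illustrative values: observed r = .55 between two content categories whose
    # reliabilities are .80 and .75
    print(f"{disattenuate(0.55, 0.80, 0.75):.2f}")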

 

Should visual spatial perception tests affect residency choices?
Keywords: choices, residencies, visual spatial perception
Authors: Martin, M.
Institution: McGill University

Summary: Visual spatial perception (VSP) tests have been statistically correlated with surgical technique. Because of this, certain residencies have looked into the application of these tests as a screening tool for acceptance into their programs. We proposed that medical students might themselves select their residency program depending on what they perceive to be their innate level of VSP skill. Our cohort was the final-year medical students at our university; we asked them to fill out a 30-minute abbreviated bona fide VSP test and then asked them which residencies they were applying to, as well as their gender. Statistical analysis was performed on the data obtained from 75 medical students who were equally divided along gender lines. Using a pre-ordained cut-off point, we showed that male students scored slightly higher than female students. After categorizing the specialties along surgical versus medical lines, we noted that female students with a low score rarely applied to surgical specialties; this was not the case for the male medical students (p = 0.20). Our results suggest that including VSP testing as part of an armamentarium of exams may be useful in advising students about their residency choice. It should not, however, be used as a criterion of exclusion.

 

The Utility of Student Ratings of Instruction for Students, Alumni, and Medical Instructors: A "Consequential Validity" Study
Keywords: student ratings, instructor evaluation, validity
Authors: Beran, T. and Violato, C.
Institution: University of Calgary

Summary: Student ratings of instruction are widely employed in universities generally and medical schools particularly, across Canada and the United States (e.g., Greenwald, 2002). Indeed, student ratings of instruction are one of the most thoroughly studied forms of personnel evaluation. Most previous research has focused on psychometric properties such as reliability and validity of student ratings instruments as indicators of the quality of teaching and the overall effectiveness of instruction by individual instructors. These results have often been conflicting and contradictory (Arreola, 1995; Kulik, 2001); reliability is generally adequate but evidence for validity is mixed. To investigate the degree to which student rating information is useful (i.e., their "consequential validity"), students, alumni, and instructors from the Faculty of Medicine at a major Canadian university were surveyed. Of the 22 students and alumni, 19 (86%) stated that the ratings are somewhat or very useful to students in general. However, 16 (73%) indicated that they had never actually used the ratings to select courses or instructors. About half (n = 6) of the instructors (n = 11) gave favorable responses about their acceptance of the use of student ratings and their own use of the ratings to improve their quality of teaching. The results of the present study indicate that although there is general acceptance of the use of student ratings by both instructors and students, there is greater evidence of "consequential validity" from instructors than from students.

 

Quality criteria for portfolio assessment of undergraduate medical students
Keywords: portfolio assessment, quality criteria, validity
Authors: Overeem K, Driessen EW, Tartwijk J van, Vleuten CPM van der
Institution: Maastricht University

Summary: Aim of the study: Portfolios have gained wide acceptance as a learning and assessment tool. Yet little research has been reported on the validity of these portfolio assessments. The issue is whether assessors are influenced by the layout and writing of the portfolio when making their judgment. The main research questions in this study were: are the quality criteria used by assessors when assessing portfolios valid, and which criterion is the decisive factor?

Method: For this study, portfolios from undergraduate medical students were used. Based on in-depth interviews with assessors and on the literature, a scoring list with fifteen quality criteria was established. The criteria on this list could be divided into two groups: content and form. Two researchers scored a stratified sample of forty portfolios using the scoring list. Inter-rater reliability was calculated with the Pearson product-moment correlation. The correlation between the quality criteria and the earlier judgment was examined with a regression analysis.

Results: Inter-rater reliability was acceptable, with an average of 0.817 across the fifteen criteria. All criteria together accounted for 78% of the variance in assessors' judgments. The strongest predictor of the end judgment was the quality of the reflections, which alone accounted for 66% of the variance.

Discussion and conclusion: This study shows that reflective skills in portfolios can be assessed in a valid way. Further research must clarify whether competencies other than reflection can also be assessed validly with a portfolio.

 

Measuring the Impact of Junior Doctor Education on Quality of Care
Keywords: junior doctor education, skill stations, competencies, quality of care
Authors: Copland, G.; McCormack, M;
Institution: Gold Coast Hospital; Queensland Health

Summary: Adverse events are a common occurrence in the hospital setting. It is recognized that the incidence of adverse events can be reduced through education in key competency areas. At Gold Coast Hospital, the intern orientation program includes a number of highly interactive skill stations based on clinical scenarios. These have been identified as critical to providing safe patient care and thus to reducing the incidence of adverse events. Anecdotal evidence and feedback from previous years suggest that the educational intervention provided through the skill stations improves interns' skill level in a number of core competency areas that have been identified as essential for the provision of safe patient care. Through the skill stations, interns are exposed to clinical situations that address each competency area in a non-threatening, safe, interactive environment. These educational sessions are provided immediately prior to the commencement of clinical duties and responsibilities, and skill stations have been a routine part of the education and training program for junior doctors at Gold Coast Hospital. The aim of our research has been to determine the benefit of this education program and its impact on the level of competency, thereby ensuring improved quality of patient care.

 

Assessing Short Course Outcomes from a Three Module Educational Program on Alzheimer's and Other Dementias
Keywords: short course evaluation, physician outcomes,
Authors: Lockyer, JM, Fidler H, Hogan DB, Pereles L, Lebeuf C, Wright B, Gerritsen C
Institution: University of Calgary

Summary: Background: Three three-hour educational modules were developed to facilitate family physician management of patients with Alzheimer's Disease. The modules covered diagnosis and pharmacotherapy; care of patients with mild to moderate dementia and late stage dementia. Teaching was done in small groups using interactive strategies (case based learning, role playing). Participants completed pre and post course (3 months) assessments as well as commitment to change statements at the end of the course with a follow-up at 3 months.

Purposes: To assess knowledge, comfort with management, and level of care provided before and 3 months after the modules. To assess physician adherence to commitment to change statements 3 months after the modules.

Methods: Paired sample t-tests were used to assess change in knowledge, comfort, and level of involvement in care. Frequency counts assessed physician implementation of commitment to change statements.
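A minimal Python sketch of a paired-sample t-test with a Cohen's d effect size of the kind reported in this analysis; the pre/post scores below are hypothetical.

    import numpy as np
    from scipy import stats

    pre = np.array([6, 5, 7, 4, 6, 5, 8, 6, 5, 7], dtype=float)   # hypothetical pre-course knowledge scores
    post = np.array([7, 6, 8, 6, 7, 5, 8, 7, 6, 7], dtype=float)  # hypothetical 3-month post-course scores

    t_stat, p_value = stats.ttest_rel(post, pre)
    diffs = post - pre
    cohens_d = diffs.mean() / diffs.std(ddof=1)  # effect size computed on the paired differences
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")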

Results: 917 physicians participated in 123 offerings of the modules over 24 months. Knowledge scores improved for modules 2 and 3 (p=.003, d=.24; p=.000, d=.44, respectively) but not for module 1. Comfort with management increased for all three modules (p=.000, d=.78; p=.000, d=1.86; p=.000, d=.88, respectively). Level of involvement increased for all three modules (p=.000, d=.78; p=.000, d=1.86; p=.000, d=.88). For all three courses, between 52% and 57% of physicians were able to implement the changes they committed to making, with another 29% to 32% partially implementing changes.

Discussion: The modules had an impact on physician practice and on physicians' comfort in managing patients with AD.

 

Incognito simulated patients for formative and summative assessment
Keywords: incognito simulated patients, assessment, consultation skills
Authors: Thistlethwaite, J.; Ridgway G
Institution: James Cook University

Summary:

Aims: To explore the acceptability and feasibility of using incognito simulated patients for formative and summative assessment in general practice and develop a code of practice for the process.

Context: The use of simulated patients is a well-established method for the training and assessment of medical students and doctors. In the Netherlands covert or incognito simulated patients are employed to assess the competence of general practitioners in the workplace. In the UK incognito simulated patients have not been used for assessment, nor has the practice been established for training in consultation skills.

Method: Two incognito simulated patients carried out consultations with eleven pre-registration house officers in five general practice settings. The simulated patients gave their views on the doctors' consultation skills. The doctors and the simulated patients were interviewed to explore their views on the acceptability of the exercise.

Results: The doctors did not object to the experience though they did have concerns. They valued receiving feedback on their skills. The simulated patients had varying views on the process. Ethical issues have been raised. There were some logistical problems in setting up the consultations.

Discussion: This appears to be a valuable additional method for training in consultation skills and assessment but a code of practice needs to be established so that doctors and simulated patients do not feel threatened by the process. Consideration needs to be given to the patient scenarios, the use of feedback and the assessment process.

 

Integrated Application of Patient Simulations Across Undergraduate Education, Entry-to-practice Assessment, and Continuing Professional Development: the Experience of Pharmacy in Ontario (Canada)
Keywords: Patient Simulation, Standardized Patients, Pharmacy Education
Authors: Austin, Z., Tabak, D., McNaughton, N., Robb, A., Marini, A., Croteau, D., MacLeod-Glover, N., Munoz, L., O'Byrne, C., and Pugsley, J.
Institution: University of Toronto, Ontario College of Pharmacists, Pharmacy Examining Board of Canada

Summary: Objective: To describe use of patient simulations for student, entry-level, and experienced pharmacists.

Design: Retrospective analysis of teaching, learning, and assessment strategies involving patient simulations by a university, a licensing examination body, and the regulatory authority in pharmacy.

Results: At all levels examined, similar approaches to patient simulations are used. Blueprinting of assessments is based on competency/outcomes documents, a combination of analytical and holistic assessments are used, standard-setting procedures are used, and consistent criteria are applied regarding communication assessment. At all levels, training and monitoring of assessors is undertaken to optimize reliability of assessment.

Conclusions: Patient simulations permeate all levels of pharmacy education and practice in Ontario. Collaboration between academic, regulatory, and examining bodies ensures consistent, fair, and valid use of simulations for teaching and assessment purposes. Use of published educational outcomes/competency statements underlies patient simulation and provides for meaningful assessment. Balanced use of holistic (global) and analytical (checklist) scoring at all levels provides a consistent approach to the use of simulations within pharmacy.

 

Working and training as an intern: a national survey of Irish interns
Keywords: education & training, internship, national evaluation
Authors: Finucane, P.
Institution: Medical Council

Summary: In recent years, the Medical Council has sought to enhance the quality of education and training of interns in Ireland. Among its initiatives have been the production of a generic job description, the introduction of a log book so that individuals can monitor their progress, and the setting up of a national 'Network of Intern Coordinators and Tutors' to supervise and further develop intern training. To evaluate the impact of these initiatives, the Medical Council undertook a postal survey of all Irish interns during 2003. Three hundred (65%) of 461 interns responded. In contrast to the experience of interns in other countries, the majority provided positive feedback on many aspects of their education and training, their work environment and their professional relationships. However, some problems were identified, including a lack of protected time for education, a lack of formal educational programmes, insufficient feedback on performance, and an unnecessarily stressful work environment. Overall, 61% reported being bullied and 4% had experienced sexual harassment. Although feedback on the internship experience in Ireland is generally positive, further work is necessary to address the problems identified. Ireland now has the necessary structures in place to promote even better intern education and training.

 

Student technical skill compared to clinical decision-making and interpersonal skills
Keywords: Clinical decision-making, competency assessment, interpersonal skills, information gathering, clinical competency, intern education, behavioral ratings
Authors: Hvidsten, L; Hulbert, J; Moe, W; Berg, M
Institution: Northwestern Health Sciences University

Summary: Study design: Data from a clinical evaluation exercise entitled the Developmental Assessment (DA) were analyzed for rater reliability and association of subscales.

Summary of background data: The DA assesses third year chiropractic students in a 50-minute, standardized-patient encounter. 23 competency variables were assigned to one of three theoretically based subscales: information gathering, clinical thinking skills, and interpersonal skills. This study asked: 1) How reliable are the ratings for these three subscales? 2) Do the 23 variables associate substantially and empirically in a confirmatory factor analysis? and 3) Are the subscales themselves associated to some degree?

Results: Confirmatory factor analysis provided evidence for two of the three subscales. Factor loadings (.250 to .700) indicated that variables relating to information gathering were highly correlated, as were variables concerning clinical thinking. Interpersonal skill variables were less correlated. The confirmed subscales, information gathering and clinical thinking, were substantially correlated with each other (r=.49; p<.001), indicating that these scores occurred together. Secondary evidence suggests that summative interpersonal subscale skills are also highly correlated with information gathering (r=.41; p<.001) and clinical thinking (r=.60; p<.001) scores.

Conclusion: The research showed strong empirical evidence that students were consistently skilled across the subscales; those highly skilled interpersonally were also skilled clinical thinkers and skilled information gatherers. The DA instrument appears to reliably assess clinical competence and could be adapted for other health care curricula. The research also supports the use of behavioral ratings in clinical training.

 

Assessing Interpersonal Communication Skills of Medical Students Using the Global Patient Assessment
Keywords: standardized patient, interpersonal communication, performance assessment
Authors: Errichetti, A.; Boulet, J.; Gimpel, J.
Institution: Philadelphia College of Osteopathic Medicine

Summary: COMLEX-USA-Performance Evaluation (PE) is a standardized patient (SP) examination developed by the National Board of Osteopathic Medical Examiners. Beginning in September 2004 all osteopathic medical students in the USA will be required to take this SP examination in addition to their written board examinations. COMLEX-USA-PE evaluates data-gathering skills (history-taking and physical examination), osteopathic medical treatment (OMT), interpersonal communication and patient management. This presentation will discuss the interpersonal communication scores derived from the Global Patient Assessment (GPA) during three pilot tests of this examination conducted at osteopathic medical schools. The GPA is a six-item rating scale of interpersonal communication skills. SPs complete the GPA along with data-gathering checklists of history-taking and physical examination skills during a twelve-station performance evaluation. Osteopathic physician raters evaluate the OMT and patient management parts. Dimensions rated on the GPA include active listening, eliciting information, giving information, empathy, respectfulness and professionalism. Score reliability using the GPA instrument has been fairly high. For example, in the most recent evaluation of 114 fourth year medical students during a twelve-station examination, the generalizability coefficient was 0.85. The presentation will also discuss the training of SPs to use the GPA instrument.

 

Are Delusions of Competence and Incompetence More Than Regression Effects?
Keywords: self-assessment, regression effects, delusions
Authors: Albanese, M; Dottl, S; Mejicano, G; Zakowski, L; Seibert, C; Van Eyck, S; Prucha, C.
Institution: U. of Wisconsin Medical School

Summary: The purpose of this study is to determine to what extent the phenomenon whereby low performers over-estimate their performance on exams and high performers under-estimate theirs can be attributed to regression effects. After completing the exam, second year medical students (N=143) estimated their performance on the course final in an Infection and Immunity course (IIF) in terms of both percent correct and percentile rank. Second year grade point averages (M2GPAs) were combined with the IIF results to form five subgroups: 1=lowest third on IIF and M2GPA, 2=lowest third on IIF only, 3=neither lowest nor highest third on IIF, 4=highest third on IIF only, 5=highest third on IIF and M2GPA. Results showed no statistically significant difference between subgroups 1 and 2 or 3 and 4, suggesting that regression effects do not account for the phenomenon of low performers over-estimating their performance and high performers under-estimating their performance. A replication in another course yielded similar results. Many forms of Problem-Based Learning (PBL) require students to assess their learning needs and set out a plan for meeting those needs. Recent studies have called into question whether low performing students can accurately assess their performance, finding that they tend to have substantially inflated perceptions of their capabilities. Determining whether these findings are real or an artifact of the research design is important to understanding the dynamic that underlies this finding. This study found that the tendency of the poorest performers to over-estimate their performance is not an artifact of the sampling design.

 

Medical School and Residency Performance as a Function of Discrepancies in MCAT Scores and Undergraduate GPAs
Keywords: MCAT, GPA, discrepancies
Authors: Albanese, MA.;Farrell, PM; Dottl, SL
Institution: U. of Wisconsin Medical School

Summary: This study examined whether inconsistency between MCAT scores and undergraduate grade point averages (GPA) related to their predictive validity. For 1992-2001 matriculants who had taken the MCAT (n=792), MCAT overall scores and GPA were standardized to z-scores (mean=0 and SD=1.0). Differences between the MCAT and GPA z-scores (z = zMCAT - zGPA) were plotted and correlated with: USMLE Step 1 and Step 2 scores; Years 1, 2 and 3 medical school GPAs; and Post Graduate Year 1 (PGY 1) residency director ratings. We also created 3-dimensional plots with undergraduate GPA and MCAT overall scores on the two horizontal axes and each of the six criterion scores, in turn, on the vertical axis. Statistically significant correlations were obtained between z and USMLE Steps 1 and 2, and medical school GPAs for years 1 and 2 (p<.001). The largest correlation was .29 (MCAT-Other GPA discrepancy correlation with Step 1). The 3-dimensional plots indicated that z obscured a complex relationship at the extreme ends of the distribution. Low MCAT-high GPA combinations divided the distribution into two different groups showing high percentages of low Step 1 scores. High MCAT-low GPA combinations divided the distribution into three different groups showing a more complex pattern involving high percentages of low Step 1 scores as well as high Step 1 scores. These results suggest that predicting medical school performance using MCAT and GPA may require more complex methods than have typically been used. In particular, linear regression may need to involve complex interaction terms in the model.
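The discrepancy score described above is straightforward to reproduce. The sketch below (Python, simulated values rather than the study data) standardizes MCAT and GPA to z-scores, forms z = zMCAT - zGPA, and correlates it with a criterion such as Step 1; the variable names and the simulated relationship are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mcat = rng.normal(30, 4, 792)                                    # simulated MCAT overall scores
gpa = rng.normal(3.5, 0.3, 792)                                  # simulated undergraduate GPAs
step1 = 200 + 8 * stats.zscore(mcat) + 6 * stats.zscore(gpa) + rng.normal(0, 15, 792)

z = stats.zscore(mcat) - stats.zscore(gpa)                       # discrepancy score z = zMCAT - zGPA
r, p = stats.pearsonr(z, step1)
print(f"r = {r:.2f}, p = {p:.4f}")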

 

Procedural Skills Levels of First Year Postgraduate Doctors
Keywords: Procedural Skills, Medical Council of New Zealand, general registration, skill requirements
Authors: Dr Andrew Old, Dr Stephen Child, Gill Naden
Institution: Auckland District Health Board, Auckland, New Zealand

Summary: Procedural Skills Levels of First Year Postgraduate Doctors In New Zealand – An International Comparison Dr Andrew Old, Dr Stephen Child, Gill Naden. The Medical Council of New Zealand (MCNZ) defines a number of clinical and procedural skills that are expected of a doctor at the end of the first year in order to gain general registration. In this study, we survey a group of junior doctors at the beginning and end of their first postgraduate year to gauge their self-perceived experience with a variety of clinical skills and conditions. At the end of the year there was a significant discrepancy between the skills expected by the MCNZ of those doctors and those skills actually attained as well as a small subset in which skills declined. A review of the literature compares New Zealand results with skill requirements of medical councils in other western countries.

 

Evaluation of basic sciences knowledge at the end of the medical course: a comparison between traditional and new curricula
Keywords: medical curriculum, evaluation
Authors: Karunaweera, Nadir D., Gamage, P., Mendis, Lalitha.
Institution: Faculty of Medicine, University of Colombo

Summary: The adoption of a new medical curriculum resulted in a drastic reduction of teaching hours of basic sciences.

Objective: To assess the knowledge of basic sciences at the end of the medical course and to make comparisons between students who followed the traditional and new curricula. An instrument (multiple choice question paper) was developed with separate questions in basic sciences subjects. The question paper was administered after the final examination. The study was done for 3 batches (batch I: traditional curriculum, AL93/94, n=206; batches II and III: new curriculum, AL94/95, n=171, and AL95/96, n=171). Computer-automated corrections were carried out. Marks obtained for each subject were analyzed separately and comparisons were made between batches. 162 (79%), 109 (64%) and 123 (72%) students participated in the study in batches I, II and III respectively. The average performance across all subjects was better (p<0.001) in students who followed the new curriculum (batch II=57.8±9.1; batch III=61.2±9.5) when compared to that of batch I (53.3±19.3). The performance was comparable in batches II and III. Individual subject marks showed a similar trend, with better performance by batches II and III, except in microbiology and parasitology, in which the performance of batch II was poor when compared to batch I. The reduction of traditional teaching (lecture and practical) hours of basic sciences in the new curriculum does not appear to adversely affect the basic sciences knowledge retained at the end of the medical course.

 

Gender validation of an OSCE
Keywords: gender, validation, OSCE
Authors: Tweed, M.; Thomspon-Fawcett, M.; Wilkinson, T
Institution: University of Otago, New Zealand

Summary: Validation should include subgroup score analysis. Gender is an important issue in a clinical examination as the gender of the candidate, patient and examiner(s) may influence the outcome. The effect of gender on results in an OSCE for 186 5th year students was studied. Two examiners marked students independently on a checklist score and a global score. We analysed the OSCE station scores allowing for gender of candidate, patient and examiner(s) individually and for interactions. 13/15 stations included patients and examiners. Female students did significantly better on stations than male students with regard to global score (2.91 v 2.85, p<0.001) but not checklist score (18.6 v 18.4, p=0.4). Stations with female patients had lower checklist scores than those with male patients (13.9 v 12.6, p<0.001) but equal global scores (2.9 v 2.9, p=0.1). There was no evidence of an examiner/student gender interaction or patient/student gender interaction for the station scores (factorial ANOVA). The global score for a male examiner pair (3.0) was higher than for a mixed pair (2.9) and a female pair (2.8) (p<0.001). The checklist scores for single gender examiner pairs were identical (13.7) but higher than for a mixed pair (12.7) (p<0.001). It is reassuring that there is no interaction between the gender of the candidate, examiner and patient. Female students scoring better and mixed examiner pairs marking lower require further study.

 

Assessment by observed consultation: Validation of content
Keywords: consultation, content, validation
Authors: Tweed, M.
Institution: Department of Medicine, Wellington School of Medicine, New Zealand

Summary: Validation of content should be more than the content being deemed relevant by experts. It should also include the way in which the scoring of this content is interpreted. During the 5th year of 6, General Medicine block students sit a 6-station OSCE. Included in this are one history-taking, two clinical examination and three data interpretation stations. The consultation stations are graded on common scales for facets: history or examination technique; problem solving/diagnostic ability; and patient relationship. These attributes are deemed relevant by faculty. The pass/fail decision is made using accumulated underperformance, with these facets being of equal weighting. Pass mark verification includes global and borderline score techniques. If the result interpretations are valid, each of these facets should contribute to the overall decision. Logistic regression analyses for prediction of global and borderline scores were used to assess the relative contribution to this decision. The standardised coefficients demonstrate that only half the variance in the pass/borderline/fail or global score is attributable to the grading for these attributes. Also, although it is agreed that a professional relationship with a patient is important, it does not contribute to the pass/fail decisions. In 2004, patient relationship will not be marked on a common scale but as a veto score, and the grading scales for other facets are being extended.

 

Accumulated underperformance as a method to convert OSCE station scores into a pass/fail decision
Keywords: OSCE, pass/fail decision
Authors: Tweed, M.
Institution: Department of Medicine, Wellington School
of Medicine, New Zealand

Summary: Different methods of producing OSCE station pass marks and combining station scores may lead to differences in pass/fail decisions. This needs to be considered when developing and comparing marking schemes. 5th year students are assessed by an OSCE during the Medical subspecialities block. The scoring scheme included using accumulated underperformance to generate a pass/fail decision. Each OSCE consisted of 3 data and 3 observed consultation stations. Each data station produced a single grade (A-F). Each consultation station included a grade for: history or examination technique; diagnostic reasoning/problem solving; and patient relationship. Hence each student was awarded 12 grades. Pass mark verification included pass/borderline/fail and global scale techniques. Satisfactory performance for a graduating student was graded C. For each grade below C the student accumulated a weighted underperformance mark (D=1, E=2, F=3). These 12 marks were summated. As these were penultimate year students, the faculty set the accumulated underperformance pass mark at 12. For comparison a compensatory method, used for other examinations, was also applied, generating a pass mark of an average of C's (36/72). Both the pass/borderline/fail and global scale indicated that the faculty pass mark was set lower than the examiners perceived it should be. By these methods the accumulated underperformance pass mark would be lowered from 12 to 6 and the compensated pass mark raised from 36 to 43. Depending on the pass-mark thresholds, pre-set or adjusted, compensatory and accumulated underperformance methods will pass/fail different students. The process is being reviewed and developed.
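As a sketch of the two decision rules described above, the Python fragment below converts a student's 12 grades into an accumulated underperformance score (D=1, E=2, F=3) and a compensatory total. The numeric values assigned to grades in the compensatory rule are an illustrative assumption (the abstract gives the thresholds, 12 and 36/72, but not the exact grade-to-number mapping), so this is one reading of the scheme rather than the faculty's implementation.

UNDERPERF = {"A": 0, "B": 0, "C": 0, "D": 1, "E": 2, "F": 3}   # weights from the abstract
VALUE = {"A": 6, "B": 5, "C": 4, "D": 3, "E": 2, "F": 1}       # assumed numeric grade values

def accumulated_underperformance(grades, pass_mark=12):
    """Pass if the summed underperformance over the 12 grades does not exceed pass_mark."""
    score = sum(UNDERPERF[g] for g in grades)
    return score, score <= pass_mark

def compensatory(grades, pass_mark=36):
    """Pass if the summed grade values reach the compensatory threshold."""
    score = sum(VALUE[g] for g in grades)
    return score, score >= pass_mark

grades = ["C", "C", "D", "B", "F", "C", "C", "D", "C", "E", "C", "C"]  # hypothetical student
print(accumulated_underperformance(grades))   # (7, True)
print(compensatory(grades))                   # (42, True)

Because the two rules aggregate differently, a student with a few severe failures can pass under compensation yet fail on accumulated underperformance, which is the point the abstract makes about the methods passing or failing different students.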

 

Multitrait-multimethod matrix validation of OSCE results
Keywords: validation, OSCE
Authors: Tweed, M.
Institution: Department of Medicine, Wellington School of Medicine, New Zealand

Summary: Validation evidence may include multitrait-multimethod matrices (MTMM). This involves correlations between different attributes assessed in different ways. During the 5th year of 6, students on the General Medicine block take an OSCE. This includes 3 data interpretation (patient summaries with common investigations) and 3 consultation stations. The consultation stations are marked on: history or examination technique; problem solving/diagnostic ability; and patient relationship. During 2003, respiratory and cardiology were the most frequent subspecialities represented on stations. An MTMM matrix was produced for these consultation and data interpretation results. Matrices for data results with consultation scores overall and with problem solving/diagnostic ability were produced. Even within the same subspecialities, data questions may not assess attributes common to the consultations, even the problem solving/diagnostic component. This may be because the data questions are described in the context of acute illness (done to ensure coverage), whereas the consultations are all with people with chronic problems. The positive correlation for data questions may be a true finding, as these questions assess common attributes (consultation and management of acute illnesses) irrespective of subspeciality. Context specificity may be apparent between acute and chronic presentations as well as between subspecialities.

 

Effect of a rotating modular curriculum on examination results
Keywords: modular curriculum, assessment
Authors: Tweed, M.
Institution: Department of Medicine, Wellington School of Medicine, New Zealand

Summary: 5th year medical students at Wellington School of Medicine rotate through 6 clinical blocks. During the General Medicine block the students sit an end-of-block OSCE. At the end of the year all students sit an end-of-year OSCE and written examination covering all specialities. Does rotation affect OSCE outcome? Marks for individual students were calculated as the proportion of an SD above/below the mean for each OSCE. As groups are not necessarily randomly allocated, the difference in the SD score between the end-of-year OSCE and end-of-block OSCE was used. The mean difference between examination scores for each block was calculated. There were correlations between the actual % scores (r=0.46, p<0.001) and the SD scores (r=0.47, p<0.001) for the end-of-block and end-of-year OSCEs. The mean difference between the end-of-year and end-of-block SD scores was significantly different across blocks (ANOVA, p=0.02). Through the course of the year there was a trend for the students to do worse in the end-of-year OSCE compared with the end-of-block OSCE (regression analysis, p=0.06). This implies that the two OSCEs assess some common attributes. The change in OSCE performance through the year has many possible explanations. These may include: faculty/testing factors, such as variance in the end-of-block OSCE through the year; and student factors, such as doing the medicine block first giving false reassurance, or doing it last resulting in the student reviewing other specialities rather than preparing for the end-of-block OSCE.
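A minimal sketch of the standardised-score comparison described above: each student's mark is converted to an SD score (z-score within that OSCE), the end-of-year minus end-of-block difference is formed, and a one-way ANOVA compares the mean difference across blocks. The data are simulated, and the block effect built into the simulation is purely illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 180
block = rng.integers(1, 7, n)                                    # block in which each student sat the end-of-block OSCE
end_of_block = rng.normal(60, 8, n)                              # simulated end-of-block OSCE marks
end_of_year = end_of_block + rng.normal(0, 6, n) - 0.5 * block   # simulated end-of-year marks with drift by block

sd_block = stats.zscore(end_of_block)                            # SDs above/below the cohort mean
sd_year = stats.zscore(end_of_year)
diff = sd_year - sd_block

groups = [diff[block == b] for b in range(1, 7)]
F, p = stats.f_oneway(*groups)                                   # ANOVA of the SD-score difference across blocks
print(f"F = {F:.2f}, p = {p:.3f}")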

 

Does changing clinical module assessments affect outcome in year assessments?
Keywords: module, assessment
Authors: Tweed, M.
Institution: Department of Medicine, Wellington School of Medicine, New Zealand

Summary: Following 3 years at a central campus, University of Otago students study at 3 clinical schools, rotating through various clinical blocks. 5th year students sit a common examination including an OSCE and written papers, which includes a significant contribution from General Medicine. The Department of Medicine at 1 school changed its 5th year block assessment from a written format to an OSCE. This was chosen for several reasons: to familiarise the students with this format; to include observation of clinical performance in the assessment; and to encourage students to spend time in clinical areas. The consequence of changing the block assessment on the year assessment was studied.

The marks in the OSCE and the written examinations from the last 3 years for each school were analysed (factorial ANOVA). Although there are marked differences in scores between years (p<0.001), there is no difference in the OSCE score between schools. However, there is a difference in the score in the written examination (p<0.001). Passing scores were not constant over the three years. Although the change did not improve the OSCE score, the lack of a practice written examination did not adversely affect the students. Curriculum, teaching and assessment changes continue at all schools, but we should try to ensure that no students are disadvantaged and that advantageous developments are shared.

 

Student perceptions on the content balance and relevance of curriculum
Keywords: curriculum content
Authors: Tweed, M.; Jackson, J.A.
Institution: Wellington School of Medicine, New Zealand, Leicester Warwick Medical School, University of Warwick UK

Summary: The Leicester Warwick Medical School curriculum comprises Phase 1, campus-based integrated biological, social and clinical science modules, and Phase 2, clinical community and hospital based teaching. Phase 1 module leaders, practising clinicians and senior Phase 2 students are able to perceive the relevance of module content to clinical practice. Previous Warwick-based Phase 1 student general feedback suggested that there was an excess of 'sociology' with no relevance to clinical practice. We explored this further. At the end of each module, students completed 8 cm visual analogue scales (VAS) to identify their perceptions of the biological, clinical and sociological content and its relevance to Phase 2 and clinical practice (0=very relevant). For each student the biological, clinical and social VAS scales are combined to form a triangle, the centroid of which is used as a representation of the overall perception. All student results are plotted to represent class perceptions. 1288/2128 (60%) VAS sheets were completed. Individuals marked the VAS consistently. There was considerable variation between modules for both the content and the perceived relevance to Phase 2 and clinical practice. Clinical content was seen as relevant; sociology was not. There was a good correlation between perceived relevance to Phase 2 and to clinical practice (r=0.91, p<0.001). The mean VAS marks for the module shown were 4.53 for relevance to Phase 2 and 4.50 for clinical practice. For Phase 1 modules, rather than abandoning content that students perceive as irrelevant, its relevance, especially that of "sociology", needs to be improved.
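One way to compute the triangle centroid mentioned above is sketched below: the three VAS marks (biological, clinical, social) are placed on three axes 120 degrees apart, the marks form a triangle, and the centroid is the mean of its vertices. The radial axis layout is an assumption; the abstract does not specify the geometry used.

import math

def centroid(biological, clinical, social):
    """Centroid of the triangle formed by three VAS marks (cm) on axes 120 degrees apart."""
    values = [biological, clinical, social]
    angles = [90, 210, 330]                       # assumed axis directions in degrees
    vertices = [(v * math.cos(math.radians(a)), v * math.sin(math.radians(a)))
                for v, a in zip(values, angles)]
    cx = sum(x for x, _ in vertices) / 3
    cy = sum(y for _, y in vertices) / 3
    return cx, cy

print(centroid(6.0, 2.5, 7.0))                    # hypothetical marks on the 8 cm scales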

 

Improving examiner consistency in an assessment of advanced life support (ALS)
Keywords: life support, examiner consistency
Authors: Tweed, M.; Stephenson, B.; Perkins, G.
Institution: Wellington School of Medicine, New Zealand, Medical School, University of Birmingham, UK

Summary: Many UK healthcare postgraduate training programs require the successful completion of an ALS provider course. Course assessments include observation of candidates dealing with cardio-pulmonary arrest using mannequins. These assessments have become high-stakes examinations. Examiner inconsistencies were previously demonstrated. The assessment process has been altered with performance criteria checklists and dual examiners. We report a re-evaluation. Using real ALS course assessments we produced a videotape that consisted of 5 defibrillation tests (including 1 repeat) and 3 CASTest scenarios. This was shown to 40 ALS examiners at 3 different centres in the UK. Individual examiners completed the checklists. Individually and then as a pair, examiners gave a pass/fail decision. Mark sheet records were used to assess consistency. Intra-examiner agreement for observation of criteria was excellent (equivalence 0.97, kappa 0.79). Pairing examiners improved pass/fail decision agreement (equivalence 0.73, kappa 0.43 v equivalence 0.65, kappa 0.22). Paired examiners reduced the pass rate (0.58 v 0.55). Differences between examiners' observations and allocation of a pass/fail decision accounted for more variability than the choice of test. Candidate background did not affect pass rate. Some but not all observed errors predicted failure.
Although there are still inconsistencies in observation and interpretation, the new marking scheme and paired examiners appear to have improved examiner consistency. Further work is required to refine the assessment process, but in doing this the main aim of the course, to improve outcomes for those requiring ALS, should be considered.
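The agreement statistics quoted above can be reproduced with standard tools. The sketch below (Python) computes raw agreement (read here as the 'equivalence' figure) and Cohen's kappa for two examiners' pass/fail decisions; the decisions are invented, and the interpretation of 'equivalence' as simple proportion agreement is an assumption.

from sklearn.metrics import cohen_kappa_score

examiner_a = ["pass", "fail", "pass", "pass", "fail", "pass", "fail", "pass"]
examiner_b = ["pass", "fail", "pass", "fail", "fail", "pass", "pass", "pass"]

agreement = sum(a == b for a, b in zip(examiner_a, examiner_b)) / len(examiner_a)
kappa = cohen_kappa_score(examiner_a, examiner_b)   # chance-corrected agreement
print(f"agreement = {agreement:.2f}, kappa = {kappa:.2f}")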

 

Does the time taken to complete a written examination influence the result?
Keywords: time, written examination, marks
Authors: Kwizera, E.
Institution: University of Transkei

Summary: Enoch N Kwizera and Julio H Aguirre. Department of Pharmacology, University of Transkei, Umtata, South Africa. Introduction: Little research has been reported on the relationship between the time it takes to complete a written examination and the mark a student obtains. We therefore sought such a relationship using a Pharmacology examination for third year medical students.

Methods: The time taken by the students to complete a 3-hour Pharmacology examination was recorded unobtrusively. In addition, the student's gender and ethnicity were recorded. The marking of the examination was blinded, and the marks obtained were matched to the time the student took to complete the examination.

Results: 44.87% of the students completed the examination in 2-2.5 hours (group II), compared to 16.67% completing it in less than 2 hours (group I) and 38.46% in 2.5-3 hours (group III). There were no statistically significant differences in the mean exam marks obtained by each group, although the probability of a student in group II or III failing the examination was ten times higher than for a student in group I.

Conclusion: The data from this study indicate that although the time it takes a student to complete the examination does not seem to influence the mark the student obtains in the examination, the student completing the examination earliest is less likely to fail.

 

Preparedness of final year medical undergraduates for internship: experience from the University of Transkei, South Africa
Keywords: PBL, community-based education, clinical clerkships, preparedness for internship
Authors: Kwizera, E.
Institution: University of Transkei

Summary: EN KWIZERA, AB NGANWA-BAGUMAH and EL MAZWAI. Faculty of Health Sciences, University of Transkei, Umtata, South Africa. The University of Transkei runs a Problem-based Learning and Community-based Education medical curriculum. As part of ongoing curriculum evaluation, we sought the views of the 2002 final year MBChB students as to how they rated their training. The Association of American Medical Colleges Graduation Questionnaire was adapted to collect the data. The role of Basic Sciences in preparing respondents for clerkships was considered inadequate for Biochemistry, Genetics, Neuroscience, and Histology. By contrast, Pharmacology, Physiology, Anatomy, and Microbiology were rated highly. There was dissatisfaction with clerkships in Radiology, Emergency Medicine, and Neurology, but clerkships in Obstetrics & Gynaecology, Psychiatry, and Paediatrics were rated highly. Areas which respondents felt had been inadequately covered included: geriatrics, nutrition, medicine and the law, occupational medicine, genetics, complementary medicine, human sexuality, family/domestic violence, and terminal care. Acquisition of communication skills had been adequately addressed. Community-based clinical training was rated highly, and the majority thought community-based clinical training was better than training in other settings. Important aspects of professionalism had been adequately covered, and the students were confident that their training had adequately prepared them for internship. Respondents' scores on the various questionnaire items closely matched those of graduates of all American medical schools for the year 2001.

 

The Impact of Differential Time Limits on Scores from a Computer-delivered Medical Performance Assessment
Keywords: performance assessment, computer-based testing
Authors: Margolis, M.J., Clauser, B.E., Harik, P.
Institution: National Board of Medical Examiners

Summary: The Impact of Differential Time Limits on Scores from a Computer-delivered Medical Performance Assessment. Context and Purpose: The growing popularity and increased cost of computer-administered examinations make issues relating to testing time more critical. The present study was intended to investigate timing issues by experimentally manipulating the allotted time on a complex medical performance assessment.

Methodology: Data were from the computer-based case simulation component of the United States Medical Licensing Examination. Nine cases and three timing conditions were examined: 15, 20, and 25 minutes (the standard time). Examinees were randomly assigned to cases and timing conditions within cases. To avoid confounding learning effects with timing, all data were collected in the eighth of nine case sequence positions on the test. ANCOVA was used to investigate score differences across timing conditions; whether timing impacted the relationship between case scores and proficiency estimates was also investigated.

Results: Significant score differences were found between the 15- and 20-minute conditions for all but one case, and between the 20- and 25-minute conditions for two cases (for one of these, scores were higher in the 20-minute condition). On average, correlations between case score and examinee proficiency decreased as testing time increased.

Conclusions: Decreasing testing time by five minutes would not have a significant impact on performance. Implications of these findings are that: a) an equally-reliable test could be administered in less time; and b) reducing testing time and adding additional cases could lead to improved test reliability.

 

Therapeutic decision skills at undergraduate level: Script Concordance test-based or written simulation-based assessment? A French pilot study
Keywords: Therapeutic decision skills, assessment.
Authors: Louis SIBERT, Francis ROUSSEL, Jean DOUCET, Jacques WEBER, Joël LECHEVALLIER.
Institution: Department of Medical Education, Rouen University Medical School, Rouen, France.

Summary: Context: In France, summative assessment of therapeutic decision skills is based on written simulations and is performed at the end of the sixth medical year. This approach has some limitations as regards test standardization and scoring objectivity.

Objectives: To compare the score and rank obtained by each student with two written assessment tools of therapeutic decision skills, based on the same educational objectives: a Script Concordance test and the above mentioned examination.

Methods: An 85-item Script Concordance test and a 6-case Patient Management Problem examination were administered to 92 students of the same Faculty. Pearson and Spearman correlation coefficients were estimated to compare the individual students' scores and ranks for both examinations. Reliability was evaluated with Cronbach's alpha coefficient.

Results: Mean examination time was 65 minutes for the Script Concordance test and 3 hours for the Patient Management Problem test. Reliability coefficients were 0.727 and 0.709 respectively. The correlations between the individual students' scores (Pearson) and ranks (Spearman) were 0.24 (p<0.02) and 0.31 (p<0.01) respectively, demonstrating a moderate but significant correlation between the results of the two examinations.
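For readers unfamiliar with the statistics above, the sketch below shows how Cronbach's alpha and the score/rank correlations can be computed from an item-by-student matrix. All values are simulated; the dimensions simply mirror the 85-item test and 92 students mentioned above.

import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """Cronbach's alpha for an array of shape (n_students, n_items)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(2)
ability = rng.normal(0, 1, 92)
sct_items = 0.5 * ability[:, None] + rng.normal(0, 1, (92, 85))   # simulated 92 x 85 item scores
sct_total = sct_items.sum(axis=1)
pmp_total = 0.3 * stats.zscore(sct_total) + rng.normal(0, 1, 92)  # simulated PMP totals

print(f"alpha = {cronbach_alpha(sct_items):.2f}")
r, p = stats.pearsonr(sct_total, pmp_total)                       # score correlation
rho, p_rho = stats.spearmanr(sct_total, pmp_total)                # rank correlation
print(f"Pearson r = {r:.2f} (p={p:.3f}); Spearman rho = {rho:.2f} (p={p_rho:.3f})")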

Conclusions: This study shows that the Script Concordance test is able to discriminate candidates' therapeutic decision capacities as well as the summative French pre-residency examination does. Modest resources are required to develop it. The Script Concordance test permits a standardized assessment of the reasoning process in the context of ill-defined problems, which is the hallmark of professional competence. These findings address issues regarding strategies for the assessment of therapeutic decision skills at the end of the medical curriculum in France.

 

Validity of Professional Skills Programme (PSP) examination scores for predicting medical students' performance in the clerkship phase
Keywords: Predictive validity, Professional Skills Program, Assessment, Canonical correlation
Authors: Al-Jishi, E.; Hamdy, H.; Prassad, K.; Fathi, A.; Salih
Institution: Arabian Gulf University-Kingdom of Bahrain

Summary: Context: The PSP was developed at Arabian Gulf University to prepare students; this study examines whether its assessment can be used to predict future performance.

Aim: To assess the predictive validity of the assessment of PSP and students' based knowledge to predict the performance at the clerkship phase

Method: Scores of 110 pre-clerkship students on the PSP examination components (history taking and physical examination) and the B.Sc. written examination were correlated with their subsequent scores on the components of the clerkship assessment (clinical, OSCE, and written). The relationship was assessed using simple correlation, regression analysis and canonical correlation.

Results: Simple correlation analysis showed an r of (1028-0.48) for the B.Sc. written and an r of (0.10-0.37) for the physical examination component in predicting the clerkship clinical score (r=0.382). Multiple regression indicated that PSP assessment and the written examination, taken together, were significant predictors of each clerkship component (R=.38, .55, .56). Two of the canonical correlations (.60, .24) were significant. The clinical skills component dominated the composition of the first clerkship canonical variate, and the B.Sc. written dominated the corresponding pre-clerkship variate. The second canonical variate showed two coefficients with opposite signs.
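For illustration of the multivariate step above, the sketch below runs a canonical correlation between three pre-clerkship predictors and three clerkship outcomes using scikit-learn's CCA on simulated data; it is not the authors' analysis or software, and the simulated relationship is arbitrary.

import numpy as np
from scipy import stats
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
X = rng.normal(size=(110, 3))                       # simulated pre-clerkship scores (PSP history, PSP examination, B.Sc. written)
Y = 0.5 * X + rng.normal(size=(110, 3))             # simulated clerkship scores (clinical, OSCE, written)

cca = CCA(n_components=2).fit(X, Y)
Xc, Yc = cca.transform(X, Y)
canonical_r = [stats.pearsonr(Xc[:, i], Yc[:, i])[0] for i in range(2)]
print("canonical correlations:", [round(r, 2) for r in canonical_r])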

Conclusion: Performance in the PSP could predict students' subsequent performance. The knowledge tests are better predictors of subsequent performance. Furthermore, univariate analysis may lead to errors when a multivariate procedure is appropriate.

 

Quality of educational units in a large university hospital
Keywords: Evaluation, quality, education
Authors: Tutosaus J, Durán I, de la Higuera JM, Díaz-O J, Morales-Méndez S, Barroeta J.
Institution: Hospitales UU. V. Rocío

Summary: Objectives: To perform a quantitative and qualitative evaluation of the different specialized educational units in our hospital. 

Material and methods: Using a previous Delphi analysis we established 28 parameters and their respective values, which were analysed in a series of pilot units. The values obtained range between 5.17 (former resident physicians currently unemployed) and 8.72 (involvement of all the staff in the educational process) points over a maximum possible score of 10 points. In the present study these criteria are applied to all hospital units. The parameters include a series of quality criteria grouped in three subgroups: structural criteria (availability of diagnostic/treatment equipment, software, etc.), process criteria (involvement of the unit staff in the educational process, availability of reports and establishment of objectives, clinical rounds, etc.) and result criteria (number of patients seen by the resident physician, number of articles and papers he/she has published and presented, etc.).

Results: The scores obtained range between a minimum of 23.0 and a maximum of 231.9. The detailed results, which are to be communicated internally within the hospital in June 2004, have not yet been finalized. However, by the time the 11th Conference is held in July 2004 further data will be available. Conclusions: From the preliminary data available so far, we observe a great variety of values, even though the study was carried out in a single hospital. This seems to be associated with the different historical development of the 41 specialties under analysis.

 

Holding Educational Rounds Electronically: Seven Medical Schools Engaged in a Continuing Dialogue
Keywords: Educational Rounds, Intermedical school collaboration, Online learning
Authors: P. Niall Byrne, Ph.D., Professor Emeritus, University of Toronto, M. Cusimano, MD, Department of Surgery, University of Toronto, S. Ginsburg, MD, Department of Medicine, University of Toronto, M. Marks, MD, Department of Medicine, University of Ottawa, B. Sadovy, University Health Network
Institution: University of Toronto

Summary: In 1997 all 5 Ontario medical schools created a videoconference program of monthly rounds focused on major issues in health professions education. The 5 founding schools, McMaster, Ottawa, Queen's, Western and Toronto, were subsequently joined by the newly established Northern Ontario Medical School and by Technion Medical School in Israel. Toronto undertook to present 5 rounds per annum, with 1 round from each of the other schools. The rounds are telecast, using a split screen at each site displaying the other participants. Each round involves an expert presenter and a discussant, followed by questions and discussion for 30-45 minutes. The rounds are accessible via the internet (http://cre.med.utoronto.ca/omen/omenrounds.htm). Attendance at these rounds provides continuing education credits to both specialist and family doctors. Examples of topics presented and discussed are: "The SARS Experience: Balancing Risk and Need in Medical Education"; "Assessment across the Continuum of Medical Education: Widen the vision and do the Doable!". Evaluations of the rounds are invariably positive. A description of some of the outcomes of these rounds will be offered.

 

A qualitative study of the impact on learning of the mini-clinical evaluation exercise in postgraduate training
Keywords: assessment, Mini-CEX, qualitative analysis
Authors: Alves de Lima, A., Henquin, R., Thierer, J., Paulin, J., Lamari, S., Belcastro, F., Van der Vleuten, C.
Institution: Instituto Cardiovascular de Buenos Aires

Summary: The study was designed to illustrate how residents perceive the Mini Clinical Evaluation Exercise as an assessment tool and its influence on their approach to learning and studying. A phenomenographic approach was applied. All 16 residents from a cardiology training program in Buenos Aires were included. Results show that in all cases residents demonstrate an intrinsic interest in the subject matter. They show self-regulating strategies when required to select, relate and make critical appraisals of their own. They consistently demonstrate an aim to build a relationship between individual experience and their chosen topic. The residents feel comfortable with the exercise because it melds with their routine. Residents find the Mini Clinical Evaluation Exercise to be a useful assessment tool with a favourable influence towards a constructive approach to study and learning.

 

Students' perceptions of peer physical examination: results from a qualitative analysis of free-text questions
Keywords: Peer physical examination
Authors: Collett, T.J, Bradley P., Rees C.E., McLachlan J.C.
Institution: Peninsula Medical School, Universities of Exeter and Plymouth

Summary: This study presents a qualitative analysis of students' perceptions of peer physical examination (PPE). 308 first year medical students from two consecutive cohorts at the Peninsula Medical School UK completed the Examining Fellow Students questionnaire (1). The questionnaire contained three free-text questions asking students about their views of PPE. Students' comments were analysed using the qualitative data analysis programme N5. Students perceived that PPE would broaden their understanding of anatomy, provide them with the skills and confidence to examine patients professionally and give them the opportunity to empathise with the experience of being a patient. PPE was seen as offering a 'safe space' in which to make mistakes and overcome embarrassment, and the use of real live bodies was regarded as aiding the process of learning. Concerns about PPE included anxiety about private areas, issues related to negative self-image and worry that peers might betray trust. Concerns around previous physical abuse, religious beliefs and being physically harmed during PPE were also raised. Students stressed the importance of good supervision, the need for a professional ethos amongst their peers, the value of informed consent and the right to withdraw from PPE without pressure. Our findings indicate that although students see PPE as a valuable learning method, social and cultural sensitivity is required in the development of educational programmes.

1. O'Neill PA, et al. Medical students' willingness and reactions to learning basic skills through examining fellow students. Medical Teacher 1998;20:433-437.

 

Work environment of residency programs in developing country set-up
Keywords: Residency; environment, work; developing countries; sexual harassment; communication; Analysis of variance.
Authors: Raza, S.
Institution: The Aga Khan University

Summary: In developing countries there is a lack of empirical investigation into the work environment of residency programs. This lack of information is a major impediment to their improvement. We collected information reflecting the working conditions of residents as perceived by them. A cross-sectional survey was conducted in four teaching hospitals of Karachi from July 1999 to January 2000. Responses of residents were obtained on a 5-point Likert scale. Indices were formed for three components of the work environment: academic, mistreatment and communication skills. Communication skills were composed of informative, affective and professional indices. Differences between resident groups were assessed through analysis of variance (ANOVA). A total of 341 registered residents responded. Surgical residents were working more than 80 hours per week. Medical group residents were spending the highest actual time on research and teaching activities (10% and 14%). The academic index score was highest for the surgical group (15.81, SD = 4.69) and lowest for the support group (11.82, SD = 4.80). The medical group had the highest perceived mistreatment index score (5.56, SD = 4.57). Patient-related communication index scores (informative and affective) were highest for medical and surgical residents. Surgical residency programs were providing a relatively better work environment. Most of the residents recognized undergraduate teaching, grand rounds and seminars or workshops as contributing to their academic learning. Reporting of sexual harassment was low, indicating either underreporting or the cultural dynamics of our setting. The high scores achieved by surgical and medical residents for patient-related interaction skills suggest the adoption of better communication strategies.

 

PHAST (Pre-Registration House Officer Appraisal and Assessment in Scotland): a useful tool for evaluating the performance of doctors in their 1st postgraduate year
Keywords: screening tool, education, postgraduate doctors
Authors: Walker, Kim, Anne Hesketh, Fiona Anderson, Chris Driver, David Marshall, Geoff Orr, Gellisse Bagnall and David Johnston
Institution: NHS Education for Scotland

Summary: A 360° diagnostic screening tool (Friedman et al 2004) has been developed as part of an appraisal and assessment system for Pre-registration House Officers (PRHOs). The questionnaire format of the screening tool rates the performance of PRHOs under the domains specified by the General Medical Council in Good Medical Practice (2001). The revised 360° questionnaire is currently being implemented for 30% of PRHOs throughout Scotland, giving each an individual profile of their performance. Three key aims of the evaluation of the 360° tool are to establish: the value of the feedback to the PRHO; the usefulness of the feedback for appraisal purposes; and the effectiveness of the tool in identifying poorly performing PRHOs. The initial data from the evaluation show that PRHOs valued the multi-professional feedback, which gave them an individual profile of their performance; educational supervisors found the feedback profile useful for appraisal purposes; PRHOs considered the ratings given to them to be fair; giving PRHOs the responsibility of distributing the questionnaires has made the implementation feasible; and the 360° tool was helpful in identifying areas of poor performance. This information, together with the results of the full evaluation of the PHAST system, will help to inform its future role as part of the educational management of PRHOs.

 

Validation of the SETOC Instrument
Keywords: Student Ratings, Reliability, Validity, Faculty Evaluation, Outpatient settings
Authors: Rukhsana W Zuberi, MD, FCPS, MHPE, The Aga Khan University, Pakistan, Georges Bordage, MD, PhD, University of Illinois at Chicago, USA, Geoffrey R Norman, PhD, McMaster University, Canada
Institution: The Aga Khan University

Summary: Validation of the SETOC Instrument - Student Evaluation of Teaching in Outpatient Clinics

Aim: An evaluation instrument, the SETOC—Student Evaluation of Teaching in Outpatient Clinics—was developed to provide specific feedback to faculty who teach in outpatient clinics. The purpose of the study was to determine the reliability and validity of student responses before putting the instrument into use.

Methods: The 15-item, single-page SETOC instrument uses a 7-point Likert-type rating scale and consists of five subscales: establishing a learning milieu, clinical teaching skills, general teaching skills, clinical competence, and an overall global rating. The instrument was administered to students (n=224) by course coordinators across clinical disciplines (k=9) in outpatient clinics at the Aga Khan University Medical College. Student ratings were anonymous and faculty names were coded by departments.

Results: Inter-rater generalizability coefficients of student ratings were 0.92 for the SETOC overall and >0.89 for each subscale. A single large factor was obtained by Factor Analysis that explained 80.8% of the variance. Four factors were extracted by orthogonal rotation to identify lower order factors, which conformed to a learner-centered factor, instructor-centered factor, learning milieu factor and an "uninterpretable" factor of miscellaneous teaching behaviors. These factors were different from the SETOC subscales but similar to those reported by student ratings of college faculty in and outside of medicine.

Conclusion: While students reliably perceived faculty teaching skills as a single dimension, there were nevertheless lower-order factors that can be useful for individual feedback.

 

Formative Assessment and Feedback Used as an Aid to Learning in the Renewed Curriculum at the Aga Khan University
Keywords: Formative Assessment, Feedback and Student Learning
Authors: Rukhsana W Zuberi, MD, FCPS, MHPE, The Aga Khan University, Pakistan, Rashida Ahmed, MD, FCPS, MHPE, The Aga Khan University, Pakistan
Institution: The Aga Khan University

Summary: Formative Assessment and Feedback Used as an Aid to Learning in the Renewed Curriculum at the Aga Khan University

Aim: To improve the formative assessment system and provide regular systematic feedback to students for individualized academic growth and enhancement of learning.

Introduction: The undergraduate medical curriculum at the Aga Khan University (AKU), Karachi, Pakistan, was reviewed in 1999. A major criticism was the infrequent formative assessments and inadequate feedback to students regarding their performance. Curricular Renewal at AKU focused on this issue among others, and the renewed curriculum was implemented from the fall of 2002.

Methods and Results: The assessment system was reviewed and formative assessment tools were developed for all domains: cognitive, psychomotor and affective. For the cognitive domain, graphic feedback is provided on student achievement on short answer and best choice questions based on module objectives. For psychomotor skills, continuous observation with on-the-spot feedback and year-end Objective Structured Clinical Examinations based on objectives are held for feedback purposes only. The affective domain is assessed continuously during small group tutorial sessions and formative feedback provided at the end of each session, as well as mid-module individualized feedback sessions for every student.

Conclusion: All aspects of learning are assessed formatively and feedback is provided to students as an aid to learning in the renewed curriculum at the Aga Khan University, Pakistan. Students and faculty have provided their views on the ongoing formative feedback systems.

 

Outcome Based Procedural Skills - implementation and evaluation
Keywords: procedural skills, outcome based education, evaluation
Authors: Carr, S.
Institution: University of Western Australia

Summary: A junior doctor's ability to function as an intern is determined by competency and experience in clinical and procedural skills as well as scientific knowledge. Fitness to practise, the main outcome of undergraduate medical training, is an important issue for universities. At the University of Western Australia in late 2002, the results of outcome evaluation of three cohorts of interns confirmed that many graduates feel well prepared to take histories and perform physical examinations but less well prepared to perform practical and procedural skills. At the same time the Postgraduate Training Accreditation Committee came to a similar conclusion and wanted to establish which procedural skills were essential for day one of internship and which skills should be taught during the early postgraduate years. As a result, a blueprint of skills taught in the medical curriculum was developed and a survey of postgraduate teaching committees at all teaching hospitals was conducted. Results of the survey, transposed over the skills map, led to the identification of the skills required for day one of internship that are currently not well taught and/or assessed in the undergraduate curriculum. The skills map and survey results enabled the development and implementation of an outcome based procedural skills training program in the last three years of the six-year undergraduate course. The purpose of this workshop is to describe the processes and evaluation results obtained to date and to discuss issues related to the implementation of skills training programs.

 

Triangulation of assessment tools explains variance between PBL groups regarding their perception of group work
Keywords: triangulation, problem-based learning, multiple choice, triple jump
Authors: Herzig, S.; Afhakama, K.; Matthes, J.; Tekian, A.
Institution: University of Cologne, University of Illinois at Chicago

Summary: Background: Problem-based learning (PBL) is widely used, but no single assessment tool ideally measures PBL outcome (Nendaz and Tekian, TLM 1999;11:232-243). In our PBL course of medical pharmacology, a marked variance between learning groups regarding the PBL process did not correlate with written exam results (Matthes et al., Naunyn-Schmiedebergs Archives Pharmacol 2002;366:58-63).

Research Question: Are differences between learning groups regarding the PBL process associated with different learning outcome, assessed by triangulation?

Methods: Process was measured by a 26-item student questionnaire, administered within three of ten PBL sessions of n=15 learning groups. The questionnaire yielded seven reliable scales. Learning outcome was assessed by 60 Multiple Choice Questions (MCQ, r=.85), by Tutor Assessment of individual students' performance (TA), using four questions (Smith et al., Acad Medicine 2003;78:97-107) during PBL sessions (alpha=.81-.91), and by a structured Triple Jump exercise (TJ, case-based selection of drug therapy and justification, alpha=.58).

Results: At the individual student level (n=132), test formats correlated weakly (MCQ versus TA: r=.36, MCQ versus TJ: r=.20, TA versus TJ: r=.03), indicating separate entities. Process variables differed significantly between groups (ANOVA). This variance was explained in part by the outcome measures (independent variables, IV). Significant regression models were found for "Judgement of PBL" (IV: TJ, r2=.22), "Team work" (IV: TJ and TA, r2=.38), "Interest in subject matter" (IV: TJ and MCQ, r2=.68) and "Tutor expertise" (IV: TJ, r2=.24).
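As a small illustration of the group-level regressions reported above, the sketch below regresses one process scale ("Team work") on group-mean outcome scores and reports r². The data are simulated for n=15 groups and the coefficients are arbitrary, so this only mirrors the form of the analysis, not its results.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n_groups = 15
tj = rng.normal(0, 1, n_groups)                                 # group-mean Triple Jump score
ta = rng.normal(0, 1, n_groups)                                 # group-mean Tutor Assessment score
teamwork = 0.5 * tj + 0.4 * ta + rng.normal(0, 0.5, n_groups)   # simulated process scale

X = np.column_stack([tj, ta])
model = LinearRegression().fit(X, teamwork)
print(f"r^2 = {model.score(X, teamwork):.2f}")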

Conclusion: In contrast to any single assessment method, triangulation explains the variance between PBL groups regarding their perception of the learning process.

 

Combining problem-based learning and information technology: an experiment with third-year medical students
Keywords: education, medical, undergraduate; problem-based learning; evaluation studies; Internet; students, medical; teaching;
Authors: Anita Burgun , Stéfan J. Darmoni, Franck Le Duff, Jacques Weber
Institution: Laboratoire d'Informatique Médicale,
Medical School, University of Rennes, 35043 Rennes Cedex, France

Summary: Objective: The Schools of Medicine of Rennes and Rouen in France have developed a new educational program (PBL-in-MI) that intends to exploit the synergy between PBL and medical informatics. Our objective was to experiment and to assess PBL as a method for teaching ICT.

Methods: The PBL-in-MI program was scheduled over six hours, including a first tutorial group meeting, then personal work, followed by a second tutorial group meeting. A problem that simulates practice, focused on information technology, is discussed. 220 third-year medical students enrolled, with two differences between the two universities: (i) in Rouen, the students were familiar with PBL, while the Rennes students were first-ever participants in PBL courses; (ii) in Rouen, the students enrolled on a voluntary basis, while in Rennes the program was included in the standard curriculum and was thus mandatory. Each student was given a questionnaire in order to evaluate the program qualitatively.

Results: The overall opinion of the students enrolled in this program was good: 70% responded positively. The questionnaire was also used to compare PBL with academic courses: 80% found PBL satisfactory, whereas 44% found academic courses satisfactory. An evaluation of electronic versus paper documents was also performed. Although the students' preference was for paper documents, the difference was not statistically significant.

Discussion: While participants judged the new program to be interesting, students in Rouen were significantly more enthusiastic. The attitudes and opinions of students were plausibly related to differences in prior experience with PBL.

 

Applications of the polytomous IRT model for the Objective Structured Clinical Examination
Keywords: polytomous IRT, OSCE
Authors: Kim, Mee Young; Huh, Sun
Institution: Hallym University College of Medicine

Summary: Background: For the assessment of students, we used the Paper-and-Pencil Test (PAPT) and Objective Structured Clinical Examination (OSCE) at the end of each semester of the clinical clerkship. We examined the correlation between the scores of the PAPT and OSCE, and that between the conventional score and the proficiency estimate for the OSCE.

Methods: In June 2003, third-year medical students (n=81) took the PAPT and a four-item OSCE. For the OSCE, partial credit was permitted. We used the polytomous IRT (item response theory) model to estimate student proficiency and item difficulty with BIGSTEPS (a program for Rasch model estimation). We analyzed the correlations with dBSTAT, a statistical package.

Results: The difficulty parameters of the 4 OSCE items were 0.38, 0.21, 0.08 and -0.67, respectively. The correlation coefficients between PAPT and OSCE scores were -0.0327 (p=0.7717) using the conventional score and -0.0102 (p=0.9279) using the proficiency estimate. The correlation coefficient between the conventional score and the proficiency was 0.9827 (p<0.0001).
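The comparisons above can be sketched in a few lines of Python (simulated scores, using scipy rather than dBSTAT); each pair of measures yields an r and a p value as reported.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    papt = rng.normal(70, 10, 81)                           # hypothetical PAPT scores (n=81)
    osce_conventional = rng.normal(60, 12, 81)              # hypothetical summed partial-credit scores
    proficiency = osce_conventional + rng.normal(0, 2, 81)  # stand-in for Rasch person measures

    pairs = [("PAPT vs conventional OSCE", papt, osce_conventional),
             ("PAPT vs proficiency", papt, proficiency),
             ("conventional OSCE vs proficiency", osce_conventional, proficiency)]
    for name, x, y in pairs:
        r, p = pearsonr(x, y)
        print(f"{name}: r = {r:.4f}, p = {p:.4f}")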

Conclusion: The conventional score was highly correlated with the proficiency estimated by BIGSTEPS, but the correlations between the two OSCE scores and the PAPT score were not significant. The results suggest that these two tests evaluated different domains. The polytomous IRT model was comparable to classical test theory in the interpretation of the examinees' proficiency.

 

High inference characteristics and related low inference behaviours of clinical teachers, preferred by clinical teachers, trainees, students and patients
Keywords: High inference characteristics; low inference behaviours; clinical teaching
Authors: Chitsabesan P (1,2), Corbett S (1,2), Walker LA (1), Spencer JA (2), Barton JR (1,2). (1) Northumbria Healthcare NHS Trust; (2) University of Newcastle upon Tyne
Institution: Northumbria Healthcare and University of Newcastle upon Tyne

Summary: Background: Teacher evaluation tends to focus on complex constructs, such as feedback or enthusiasm, which require the observer to make an inference about the qualities of the teacher. By contrast, low-inference behaviours (1), such as length of utterances, or non-verbal behaviour, can be reliably measured or counted without recourse to inference. Feedback on specific low-inference behaviour pinpoints aspects of teaching that can be changed more easily (1). However, previous work has failed to demonstrate how high and low inference behaviour might be linked, or which low inference behaviours are associated with high quality teaching (2,3).

Aims: To identify characteristics and behaviour associated with good and bad clinical teachers and teaching episodes.

Methods: Hospital Consultants, middle grade doctors, and medical students described characteristics of good and bad clinical teachers using a repertory grid (RG), and clinical teaching episodes using critical incident technique (CIT). Patients were interviewed using CIT only. Three researchers using NVIVO software then coded the transcripts. The coding was compared to check agreement between raters and the credibility of the interpretation. The RG categories were used as attributes to examine the relationship between characteristics and specific teaching behaviour.

Results: High-inference teaching characteristics were associated with clusters of low-inference behaviours. A construct of importance for interviewees was the quality of feedback. Interviewees' quotes about low-inference feedback behaviour are shown on the left of the table, with their views on the right.

Discussion: This novel method identifies clusters of behaviour that are linked to preferred teachers' characteristics and experiences of good teaching.

Reference List:

1) Murray HG. Low-inference classroom teaching behaviors and student ratings of college teaching effectiveness. Journal of Educational Psychology 1991; 75(1).
2) Barber SG. Postgraduate teaching audit by peer review of videotape recordings. Medical Teacher 1992; 14(2/3): 149-157.
3) Booth MB. The quantitative and qualitative analysis of the teaching role of the registrar. Medical Teacher 1998; 20:43-45.

Table: Interviewees' quotes on 'feedback' and the related low-inference behaviours

Corresponding author: Praminthra Chitsabesan, Teaching and Research Fellow, Room T95, Education Department, North Tyneside General Hospital, Rake Lane, Tyne & Wear, NE29 8NH

 

The awareness and analysis of a significant event by general medical practitioners: educational implications
Keywords: Significant event, audit, general practice
Authors: Bowie, P. McKay, J. Norrie, J. Lough, M.
Institution: University of Glasgow

Summary: Title: The awareness and analysis of a significant event by general medical practitioners: educational implications (1)

Objectives: To determine if general practitioners were aware of a recent significant event and whether a structured analysis of this event was undertaken to minimise the perceived risk of recurrence. The discussion of significant events, the fora used for analyses and primary care team involvement were also explored.

Method: Cross sectional postal questionnaire survey of general practice principals in Greater Glasgow.

Results: Four hundred and sixty-six GPs (76%) responded. GPs from single-handed practices were less likely to respond than GPs from multi-partner training and non-training practices. 401/466 GPs (86%) reported being aware of a recent significant event, with unawareness clearly associated with GPs from non-training practices (P<0.001). Of this group, 219/401 (55%) indicated performing all the necessary stages of a structured analysis (What happened? Why did it happen? What was learned? What was changed?) of the recently identified significant event. Univariate and multivariate analyses showed that GPs from training practices were more likely to report participation in the structured analysis of the recent event (P<0.001) and also to perceive the chance of this event recurring as 'nil' or 'very low'. Training practice respondents were more likely to report significant event discussions taking place at partners' meetings (P<0.001), team meetings (P<0.001) and dedicated audit meetings (P=0.06).

Conclusions: There is variation in the depth of and approach to significant event analysis, which suggests there are educational implications associated with the understanding and application of the technique. This may have implications for the effective use of SEA as part of the NHS quality agenda.

Reference: 1. Bowie P, McKay J, Norrie J, Lough M. Awareness and analysis of a significant event by general practitioners: a cross sectional survey. Quality & Safety in Health Care (In press)

 

The educational peer assessment of significant event analyses in general medical practice
Keywords: significant event, audit, general practice
Authors: McKay, J. Bowie, P. Dalgetty, E. Lough, M
Institution: University of Glasgow

Summary: Title: The educational peer assessment of significant event analyses in general medical practice

Objectives: To explore the motivational factors and experiences of general medical practitioners who submit SEA reports for educational peer assessment (1,2).

Methods: Two qualitative focus group sessions were conducted with a convenience sample of 21 general practitioner principals in Greater Glasgow who had submitted SEA for peer review.

Results: General practitioners cited the requirements of external accreditation bodies as one motivating factor in undertaking SEA. They believed that formally documenting SEA led to more structured, in-depth analyses than when SEA was undertaken informally. SEA was perceived as facilitating team-based learning, improving teamwork and the management of complaints, and as useful in alerting the team to problems. GPs described strong emotional connections, including guilt, blame and personal catharsis, with some significant events, which made it difficult to notify colleagues for fear of exposure. GPs submitted SEA reports for peer review as part of their postgraduate education, but also for the perceived added value of the process and for reassurance from peers that the analysis had been done properly, which provided personal satisfaction. The reports submitted were highly selective because of concerns about confidentiality, public exposure and possible litigation. GPs believed there was little value in analysing positive significant events and always prioritised problem events.

Conclusions: GPs believe that external review of SEA, including peer assessment, formalises the process and can increase the effectiveness of event analyses.

References: 1. McKay J, Bowie P, Lough M (2003). Evaluating significant event analyses: implementing change is a measure of success. Education for Primary Care, 14(1): 34-38

2. Bowie P, McKay J, Lough M (2003). Peer assessment of significant event analyses: being a trainer confers an advantage. Education for Primary Care, 14(3): 338-344

 

The development of a peer evaluation process at the Sackler School of Medicine, Tel Aviv University
Keywords: Peer-evaluation; Quality of instruction; Clinical curriculum; Formative evaluation
Authors: Tur-Kaspa, R.; Abramovitz, R.; Notzer, N.
Institution: Tel Aviv University

Summary: Peers are an excellent source of data for evaluating the quality of instruction. The process is still experimental and its validity has not been fully established. This paper describes the peer-evaluation process, carried out in collaboration between the committee for assessment, the departments' chairmen and the unit of Medical Education.

The objectives of the process are:

1) Monitoring clinical instruction for small groups of students (5-6), carried out in 7 affiliated hospitals at 98 clinical sites.

2) Providing feedback in the midst of the clerkship, enabling implementation of changes if needed (formative evaluation), and

3) Balancing the data from ongoing students' feedback questionnaires (summative assessment).

Process: Each school department's chairman selects a committee of senior faculty. This committee sends sub-committees (2-3 physicians) to visit each clinical site during the period of the clerkship. The sub-committees meet the head of the clinical site, tutors, physicians and students, and look at materials and facilities. The visit concludes with suggestions to the clinical site's head for immediate implementation. A full report, based on a structured scaled questionnaire and processed by the unit of medical education, is then completed by the sub-committee and presented to the curriculum committee, the academic departments' chairmen and the clinical sites' heads.

Conclusions: The process, which was started 3 years ago, is highly esteemed by faculty and students. It motivates clinical sites to excel, encourages the exchange of ideas among departments and improves the clinical curriculum. The pitfalls lie in the amount of time required and the reluctance of peers to grade colleagues. The validity and reliability of this evaluation still need to be studied.

 

The effects of part-whole practice schedule on the acquisition of a complex bone plating surgical procedure
Keywords: orthopedics, learning, technical skills, evaluation
Authors: Dubrowski, A.; Backstein, D.; Abughaduma, R.; Leidl, D.; Carnahan, H.
Institution: University of Toronto; University of Waterloo

Summary: Introduction: Practicing surgical tasks in a laboratory-based teaching environment can be arranged as whole practice, where the entire task is taught, or as part practice, where the task is broken down into its fundamental components and each component is taught separately before the whole procedure is put back together. These components can be practiced either in blocks or randomly. The question of the optimal practice schedule for novice surgeons acquiring the unique task of bone plating is critical for enhancing current training programs.

Method: During a 3-hour acquisition phase, a bone-plating task was practiced as a whole (Whole group), or in parts arranged in either a random (Random group) or a blocked order (Blocked group). Performance was assessed on global ratings, checklists, and final product analysis before (Pre-test), immediately after (Post-test I) and one week after the practice session (Post-test II).

Results: Checklists and final product analysis showed that the Whole group performed better than the Random group, which performed better than the Blocked group. Global rating scores were insensitive to the changes in performance as a function of practice schedule. There were no differences between the two Post-tests.

Conclusions: Participants improved their performance, and this improvement was retained for a week. Based on the present results, it is recommended that surgical tasks composed of several discrete components should be practiced as a whole. However, if part practice is necessary, the components should be arranged in a random order to optimize task performance.

 

Standard Setting for Communication and Interpersonal Skills
Keywords: standard setting, standardized patients, communication skills, OSCE
Authors: Yudkowsky, R.; Downing, S.
Institution: University of Illinois at Chicago COM, Dept of Medical Education

Summary: Purpose: To compare three methods of standard setting for a standardized patient (SP) based assessment of communication and interpersonal skills (CIS).

Method: 79 PGY2-3 residents took part in a six-station CIS assessment. SPs completed an 18-item rating scale including a global item asking if they would choose this resident as their personal physician. Items were rated on a five-point scale. Overall global and case scores were generated by averaging scores across all six cases. Standards were generated in three ways: "Global": a grade of "fail" was assigned to all residents whose overall global score was below 3.0 – i.e., residents whom patients would not choose as their physician. Contrasting Groups: the standard was set at the intersection of the overall case score distributions of residents with overall global scores of 1-3 and those with scores of 4-5. Borderline Group: the standard was set at the mean overall case score of all residents with an overall global score of 3-3.99, i.e. those residents the patients were not sure if they would choose.
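For illustration only, the following Python sketch works through the three standard-setting computations described above on simulated ratings (not the study data); taking the contrasting-groups cut where normal densities fitted to the two groups intersect is one common operationalization and is an assumption here.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    overall_global = rng.uniform(1, 5, 79)                              # mean global item across 6 cases
    overall_case = 2.0 + 0.6 * overall_global + rng.normal(0, 0.3, 79)  # hypothetical mean case scores

    # 1) Global method: fail residents whose overall global score is below 3.0
    global_pass_rate = np.mean(overall_global >= 3.0)

    # 2) Contrasting groups: intersection of the case-score distributions of
    #    residents rated below 4 vs 4 and above on the global item
    low = overall_case[overall_global < 4.0]
    high = overall_case[overall_global >= 4.0]
    grid = np.linspace(overall_case.min(), overall_case.max(), 1000)
    diff = norm.pdf(grid, low.mean(), low.std()) - norm.pdf(grid, high.mean(), high.std())
    mask = (grid > low.mean()) & (grid < high.mean())
    contrasting_cut = grid[mask][np.argmin(np.abs(diff[mask]))]

    # 3) Borderline group: mean case score of residents with global scores of 3.00-3.99
    borderline_cut = overall_case[(overall_global >= 3.0) & (overall_global < 4.0)].mean()

    print(f"Global method pass rate: {global_pass_rate:.0%}")
    print(f"Contrasting-groups cut score: {contrasting_cut:.2f}")
    print(f"Borderline-group cut score: {borderline_cut:.2f}")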

Results: The three methods resulted in passing rates of 71%, 55%, and 50% respectively – see table.

Discussion: Contrasting group and borderline group methods resulted in unacceptably low pass rates. Interestingly, residents with "neutral" mean scores in the 3.00-3.99 range were generally rejected by patients. Physicians may need to actively connect with patients, rather than just avoid offense, in order to achieve an effective alliance.

 

Method                Passing Score                    Passing Rate
Global                Overall Global Score = 3.0       71%
Contrasting Groups    Overall Case Score = 3.97        55%
Borderline Group      Overall Case Score = 4.03        50%

Refining physiotherapist examiner training through an evidence based training curriculum and process
Keywords: OSCE, examiner training
Authors: Cooper, M. Alison
Institution: Canadian Alliance of Physiotherapy Regulators

Summary: The Physiotherapy Competency Examination (PCE) is the entry-to-practice examination for physiotherapists in Canada. This two-part examination consists of a Written Component and a Clinical Component. The Clinical Component is a 16-station Objective Structured Clinical Examination (OSCE) that includes both clinical encounters and written (short answer) stations. Practicing physiotherapists are recruited as examiners for the clinical stations and markers for the short answer stations. Consistent and accurate performance by physiotherapist examiners contributes to the reliability of results from the Clinical Component. Examination-day feedback and incident reports suggested that a review and refinement of physiotherapist examiner training could improve consistency of performance and examiner satisfaction. The adult education and rater training literature was reviewed for information on best practices in examiner training. Qualitative and quantitative data from four administrations of the Clinical Component were analyzed for trends and themes. A facilitated working group comprising Chief Examiners, examiners, a psychometrician and administrative staff reviewed the literature and data analysis and developed recommendations for refinements to the examiner training materials and processes. The recommendations encompass the examiner training curriculum elements of recruitment, confirmation, pre-examination training, examination-day orientation and training, written station marker training, dry run with a standardized client, and post-examination feedback and quality assurance. The report includes some discussion of the resource and timing issues related to implementation of the recommendations.

 

Effects of Case-Based Learning on the Self Directed Learning Skills of Nursing Students in College of Health Sciences
Keywords: Self directed learning, Nursing students, Case based learning
Authors: Eman Tawash and Usha Nayar*
Institution: College of Health Sciences, College of Medicine*, Arabian Gulf University, Kingdom of Bahrain

Summary: Purpose: To investigate the degree to which case-based learning (CBL) helped nursing students improve their self-directed learning (SDL) skills.

Methods: A questionnaire consisting of ten questions (adapted from Knowles [1975]) was administered to 300 nursing students (1st, 2nd, and 3rd years) at the College of Health Sciences, Kingdom of Bahrain. SPSS version 11.5 was used to analyze the data. Wilcoxon, Mann-Whitney, and Kruskal-Wallis tests were used for the analysis.

Results: The results of the study were generally significant. Nursing students' SDL skills improved significantly as they advanced from the first to the third year of study (p=0.002). Students perceived improvement in nine of the ten skills investigated by the questionnaire. These skills were the ability to: question and inquire for knowledge; accept others' ideas and points of view; scan data and choose related resources; collect data about their performance; evaluate their present performance using these data; express their learning needs as learning goals and activities; set goals to improve their present performance; observe others and use them as role models to improve; and make a commitment to work on their goals and maintain continuous self-motivation.

Conclusion: Nursing students perceived that they had improved their self directed learning skills after being exposed to the case based learning curriculum.

 

A structured oral examination for neonatal - perinatal medicine
Keywords: oral examination; evaluation
Authors: Jefferies, A.
Institution: Mount Sinai Hospital

Summary: A structured oral examination for neonatal – perinatal medicine. Ann Jefferies, Brian Simmons, Eugene Ng and Martin Skidmore. Dept. of Paediatrics, University of Toronto, Ontario, Canada

Purpose: As traditional oral examinations have low reliability, we evaluated a structured oral examination (SOE) in our neonatal – perinatal medicine subspecialty training program.

Methods: Thirteen candidates participated in a SOE, consisting of 4 1st year and 4 2nd year clinical scenarios with 2 – 7 standardized questions, expected responses and a predetermined marking scheme. For each scenario, 2 examiners assigned scores independently and completed a 7-point global rating to evaluate overall performance. SOE scores were compared with scores from an OSCE administered 6 months previously.

Results: The mean percentage score was 64 +/- 10 (sd) for 1st year candidates and 66 +/- 13 for 2nd year candidates. Global ratings for 1st and 2nd years were similar (4.6 +/- 0.8 and 4.8 +/- 1.1). The correlation between scenario scores and global ratings was significant (r = 0.81, p < 0.001). Inter-station reliability for the global ratings was moderate (Cronbach's alpha 0.43 for 1st year and 0.53 for 2nd year). Inter-rater reliability was substantial (ICC > 0.61) for 65% of the scenarios. Correlations between SOE and OSCE scores and overall global ratings were significant (r=0.58, p=0.04 and r=0.63, p=0.02 respectively).
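For readers unfamiliar with the reliability index used above, a minimal Python sketch of Cronbach's alpha across scenarios follows; the ratings are simulated and are not the study data.

    import numpy as np

    def cronbach_alpha(scores):                 # rows = candidates, columns = scenarios
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(4)
    ability = rng.normal(0, 1, (13, 1))         # 13 hypothetical candidates
    ratings = np.clip(np.round(4.5 + ability + rng.normal(0, 0.7, (13, 4))), 1, 7)  # 4 scenario ratings
    print(f"alpha = {cronbach_alpha(ratings):.2f}")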

Conclusions: Reliability of the SOE was appropriate for a training program assessment tool. The SOE was well accepted by trainees and faculty and is a useful method of assessing subspecialty trainees.

 

The assessment doctor: pilot study of an instrument for the evaluation of theoretical examination papers in a medical programme
Keywords: evaluation of assessment, cognitive levels, alignment
Authors: Wasserman, E. and Burch, V.
Institution: University of Stellenbosch, Republic of South Africa

Summary: An instrument was developed to evaluate the educational alignment of assessment events occurring in the context of theoretical modules in a medical curriculum. The instrument incorporates the following measurements: an attempt was made to match each assessment event with a stated outcome of the written curriculum. Thereafter, the strength of this match was evaluated according to the degree of overlap between the knowledge domain of the assessment event and the learning outcome. Finally, the cognitive levels of both the assessment event and the learning outcome were determined using a set of action words or phrases based on Bloom's taxonomy but adapted to the medical environment. Processed data are represented visually as graphs to illustrate possible gaps and overlaps in the assessment of learning outcomes (demonstrating aspects of the hidden curriculum), as well as the degree of cognitive alignment between the stated and assessed curriculum. This pilot study applies the instrument to three different examination papers used in 2002 for assessment in the MB.ChB. program at the University of Stellenbosch, Republic of South Africa. Both an internal and an external evaluator applied the instrument, and their consensus will be statistically analysed. The implications of this instrument for the quality assurance of assessment in this setting will be illustrated. We acknowledge the support of the Foundation for Advancement of International Medical Education and Research (FAIMER) for this work.

 

Using standardized patients to assess the communication and interpersonal skills of physicians: Six years experience with a high stakes certification examination
Keywords: Standardized patients, Communication skills assessment
Authors: van Zanten, Marta; Boulet, John R.; McKinley, Danette W.
Institution: ECFMG

Summary: Communication and interpersonal skills are essential elements of a physician's clinical competence. The abilities to interview effectively, counsel appropriately and establish caring relationships with patients are integral parts of being a successful health care provider. Since 1998, the communication and interpersonal skills of over 43,000 physicians have been assessed as part of the Educational Commission for Foreign Medical Graduates' (ECFMG's®) Clinical Skills Assessment (CSA®). Standardized patients (SPs) were used to assess interpersonal skills along four dimensions (skills in interviewing and collecting information, skills in counseling and delivering information, rapport, and personal manner). The content of the rating scale, the development and implementation of training materials and procedures, the psychometric characteristics of the measure, the characteristics of poor performers, and the quality assurance protocols are described. Data from over 400,000 patient encounters were analyzed to evaluate the psychometric properties of SP ratings and the underlying structure of the measure. Generalizability analyses showed that the ratings were reproducible over encounters (coefficients > 0.80). Correlations with other measures supported the construct validity of the assessment. Overall, the findings indicate that SPs, provided they are adequately trained, can provide accurate and defensible ratings of physicians' interpersonal and communication skills.

 

A Predictive Validity Study of MCAT and Undergraduate GPA Employing the Medical Council of Canada's Licensing Examination
Keywords: MCAT, Predictive Validity, Medical School Admissions
Authors: Donnon, T; Violato, C; Lemay, J-F; Jones, A
Institution: University of Calgary

Summary: The main criteria for selecting candidates to undergraduate medical education programs have been previous measures of academic and cognitive performance (i.e., undergraduate GPAs and MCAT scores). With increasing concerns about the validity of medical school admissions processes, questions about the predictive validity of both cognitive and non-cognitive (e.g., admission interview) variables have been raised. In the present study, a stepwise multiple regression analysis of 543 admitted candidates (female = 264, 48.6%; male = 279, 51.4%; matriculating years 1994-2001) showed that the cognitive independent variables (i.e., undergraduate GPA, MCAT subscale scores) accounted for only 24% of the variance in their Medical Council of Canada Qualifying Examination Part I final scores (multiple R = .493). In this model, students' undergraduate GPA accounted for the most variance at 12%, with an additional 10% for the MCAT Verbal and 2% for the MCAT Biological subscales. Other admission measures (i.e., years of education, mean ranking on interview, and MCAT Written and Physical subscales) did not make significant contributions to the regression equation (ns). Neither the MCAT nor undergraduate GPA, therefore, is a strong predictor of performance on the Medical Council of Canada's Qualifying Examination Part I.
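The hierarchical logic reported above can be sketched in Python on simulated data (not the study dataset): predictors are entered one at a time and the incremental variance explained in the licensing-exam score is recorded.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(6)
    n = 543
    gpa = rng.normal(3.6, 0.3, n)                 # hypothetical undergraduate GPAs
    mcat_verbal = rng.normal(10, 2, n)            # hypothetical MCAT Verbal scores
    mcat_bio = rng.normal(10, 2, n)               # hypothetical MCAT Biological scores
    qe1 = 40 * gpa + 4 * mcat_verbal + 2 * mcat_bio + rng.normal(0, 60, n)  # simulated MCCQE I score

    r2_prev, X = 0.0, np.empty((n, 0))
    for name, col in [("GPA", gpa), ("MCAT Verbal", mcat_verbal), ("MCAT Biological", mcat_bio)]:
        X = np.column_stack([X, col])             # add the next predictor to the model
        r2 = LinearRegression().fit(X, qe1).score(X, qe1)
        print(f"+ {name}: R^2 = {r2:.2f} (increment {r2 - r2_prev:.2f})")
        r2_prev = r2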

 

Building a Validity Argument for a Medical Licensing Examination
Keywords: validity, testing, licensure
Authors: Dillon, G., Clauser, B., Hawkins, R., and Swanson, D.
Institution: National Board of Medical Examiners

Summary: Building a Validity Argument for a Medical Licensing Examination. Gerard F. Dillon, Brian E. Clauser, Richard E. Hawkins, and David B. Swanson, National Board of Medical Examiners, USA. In educational and psychological testing, validity refers to the degree to which evidence and theory support the interpretation of test scores, and validation is the process of accumulating such evidence. This can be particularly challenging for a large-scale examination program like the United States Medical Licensing Examination (USMLE), which is intended to inform the decision made by U.S. medical licensing authorities. In essence, that decision relates to whether a physician is ready to assume independent responsibility for the delivery of safe and effective patient care. It has been suggested that test developers and score users think of the interpretation of examination results as requiring a series of inferences, which begin at the point of blueprint definition and end with the ultimate licensure decision. Thought of in this way, it is possible to identify, organize, and prioritize validation efforts around those inferences that appear to be least well supported. The purpose of this paper is to describe the series of inferences that underlie the interpretation of USMLE results, to suggest points in the series that might require the highest priority, and to provide examples of analysis and research that address these areas. Particular attention will be given to how the challenges to validation change as a function of adopting more authentic assessment formats.

 

Pre-medical achievement as predictor for success in medical school
Keywords: Achievement, preclinical, clinical
Authors: Wimmers, P.; Schmidt, H. G.
Institution: Erasmus MC - University Medical Center Rotterdam

Summary: Students of medical schools in the Netherlands are admitted by weighted selection; since the academic year 2000/2001, however, medical schools have also started to use additional selection criteria. This study was conducted to investigate whether pre-medical achievement could be a predictor of success in medical school. Medical school is often designed according to a two-stage model with a sharp distinction between preclinical and clinical education, so it is interesting to ask whether pre-medical achievement predicts both stages equally well. The data were drawn from a cohort that completed medical school successfully. Results revealed that the correlation between pre-medical achievement and preclinical achievement (r = .488, p < .001) is higher than the correlation between pre-medical achievement and clinical achievement (r = .194, p < .005). The correlation between preclinical and clinical achievement is r = .294 (p < .001). The amounts of variance in preclinical and clinical achievement explained by pre-medical achievement are 24 and 4 percent, respectively. The correlations between study duration and achievement are -.213 and -.202 (p < .005) for preclinical and clinical education, respectively. In conclusion, pre-medical achievement is a good indicator of achievement in the first, preclinical years of medical education, and study duration in medical school is an indicator of achievement. On the other hand, the relation between the two stages of medical education is low; as a result, the predictive value of pre-medical achievement for clinical achievement is also low. It seems that different factors are important in the clinical years than in the preclinical years.

 

The Patient's Story: Assessing Student Essays on Personal Experiences in Authentic Environments
Keywords: Assessment, Narrative Medicine, Medical Student Essay
Authors: Robert J. Bulik, Ph.D.; Donna B. Weaver, M.D.; Joan Hanor, Ph.D.; Debra Newell, Ph.D.; Sherry Wulff
Institution: University of Texas Medical Branch, Galveston, TX

Summary: The Family Home Visit program, a community-based experience for first-year medical students, is designed to improve interviewing skills in non-clinical settings. One of the course requirements, the Patient's Story essay, is written by students after visiting multi-generational families in their homes and reflecting on the experiences (the "story") of the individuals interviewed. Evaluating these kinds of experiences is difficult and overwhelming for any one faculty member, and the assessment literature on this topic is sparse. Consequently, we designed a scoring rubric with which multiple faculty graders could read student essays and assign a grade. This proposal describes an innovative approach to assessing student essays within the domain of narrative medicine. Seven faculty raters shared the responsibility for scoring 310 blinded student essays over the course of two years. Three scoring categories (low, moderate, and high), with point values corresponding to the percentage weight, were established for each of five essay elements corresponding to the characteristics of a short story: introduction, character, setting, plot, and integration (application of this experience to future practice). Ten essays in each of the two years of the project were scored by all raters to establish an initial agreement coefficient (p = 87.5). As a check on reliability, rater agreement coefficients on the five elements of the essay were determined and ranged from p = 84.6 to p = 90.2. Raw scores were equated to s-scores and final grades established, yielding a good distribution: 43% pass; 37% high pass; 19% honors.

 

The performance of International Medical Graduates on the Medical Council of Canada Part II Clinical Performance Examination
Keywords: licensure examination, clinical performance examination, international medical graduate
Authors: Birtwhistle, R., Blackmore, D., Touchie, C., Smee, S., Humphrey-Murto, S., Wood, T.
Institution: Medical Council of Canada

Summary: The Medical Council of Canada (MCC) administers an Objective Structured Clinical Exam (OSCE) known as the MCC Qualifying Examination Part II (MCCQE Part II) that leads to the Licentiate of the MCC (LMCC). The LMCC is a qualification that is used as a prerequisite to licensure in Canada. The MCCQE Part II has been administered in Canada since 1992 to about 2300 candidates per year. Each examination cohort is made up of International Medical Graduates (IMGs) and Canadian Medical Graduates (CMGs). Many IMGs do very well on the MCC examinations, but others do not. This study looks at the variability of IMG performance on the MCCQE Part II, identifying content issues as well as comparing the performance of IMGs who have trained exclusively outside of Canada with that of IMGs who have done some residency training in Canada. The performance measures that will be reported include communication skills, data acquisition skills, problem solving skills, and knowledge of the legal, ethical and practice organization aspects of the practice of medicine in Canada.

 

Using Standardized Patients to Assess the Clinical Performance of Entering Family Practice Residents
Keywords: Medical Education, assessment, family practice, residents, clinical competencies
Authors: Nieman, L., Moreno, C.; Gladu, R.; Cheng, L.; Dumas, C.
Institution: University of Texas Health Sciences Center at Houston, Medical School

Summary: Purpose: Entry-level family medicine residents come from diverse educational backgrounds. The dual purposes of this evaluation study were to describe the entry Objective Standardized Clinical Exercise (OSCE) as a method of assessing basic clinical skills early in a family medicine residency and to study the relationship between the OSCE and other standardized measures of performance.

Methods: Thirty-four first-year residents participated in the OSCE. Correlation studies were used to compare United States Medical Licensing Exam (USMLE) Step 1, Step 2, and in-training examination scores with OSCE scores. Residents rated their perceptions of the OSCE after their regular rotations began.

Results: The residents did not perform as well in demonstrating skills of physical examination (42.1%) as they did in demonstrating skills of patient-doctor interaction (82.9%) and history taking (63.8%) (both Ps<0.01). Significant variability was found among residents on the total OSCE and subscales (all P<0.05). No difference was found between graduates of United States (US) and foreign medical schools on the overall OSCE scores (P>0.05), but differences appeared in history taking scores (P<0.05). The total OSCE score correlated significantly with the USMLE Step 1 score (P<0.05), while the history taking score correlated significantly with the USMLE Step 2 (P<0.01). Most residents (72.3%) either agreed or strongly agreed that the OSCE offered a fair assessment of their clinical skills.

Conclusion: Physical examination skills need greater emphasis in the residency curriculum. Knowing specific strengths and weaknesses of residents may allow faculty to provide more resident-specific precepting. Results suggest that the OSCE may be a helpful routine measure for entering residents.

 

Problem finding ability among students in problem-based learning (PBL) tutorials: an assessment by a computer system
Keywords: Problem-based learning, Assessment, Computer-based testing
Authors: Suganuma T, Tang AC, Osawa M, Taakakuwa Y, Yoshioka, T.
Institution: Departments of Medical Education, Pediatrics and Biochemistry, Tokyo Women's Medical University, School of Medicine

Summary: PBL tutorials in the first four years of the 6-year medical course at our university are combined with lectures and practicals as part of organ- and function-based integrated curriculum units. First-year students, mostly high-school graduates, are often confronted with difficulties in finding learning objectives or problems in a given case. The objective of the study was to assess the problem-finding ability of these students using a newly developed computer-based assessment system. First-year (n=79) and second-year (n=77) medical students in 2003 underwent the assessment and the results were compared. Students of both years were given a short case whose content had not been formally covered in their regular curriculum. The students extracted problems from the case and categorized them into one of 12 medical categories on their computers. Data were collected at a host computer. The total number of problems extracted was counted and each problem was sorted as having superficial keywords (words present in the given case) or profound keywords (medical words not present in the given case). The results showed that the second-year medical students found a significantly greater number of profound keywords (p<0.01) in their extracted problems. There was no difference in the number of superficial keywords between the two groups. The results indicate that the second-year students, who had longer experience of PBL and medical education, were capable of finding more self-initiated, exploratory problems in various categories, rather than simple reproductive problems, from a case.
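The superficial/profound sorting step can be illustrated with a short Python sketch; the case text and keywords below are invented for the example, and the matching rule (simple substring presence) is an assumption about the system's logic.

    # Sort each extracted problem keyword into "superficial" (appears in the
    # given case text) or "profound" (does not appear in the case text).
    case_text = "A 6-month-old boy presents with fever and a generalized seizure."
    extracted = ["fever", "seizure", "febrile convulsion", "lumbar puncture"]

    case_lower = case_text.lower()
    superficial = [kw for kw in extracted if kw.lower() in case_lower]
    profound = [kw for kw in extracted if kw.lower() not in case_lower]

    print("superficial:", superficial)   # words taken directly from the case
    print("profound:", profound)         # medical terms the student supplied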

 

Clinical Skills assessment in Catalonia. A professional accreditation strategy. 1993-2004
Keywords: assessment, professionals, accreditation
Authors: Josep Maria Martínez-Carretero, Josep Arnau-Figueras, Carles Blay, Eduardo Kronfly, Montserrat Solà, Lluís Gràcia, Ramon Descarrega, Nieves Barragán.
Institution: Institute of Health Studies

Summary: The Institute of Health Studies (IHS), an agency of the Department of Health and Social Security of the Autonomous Government of Catalonia, started several projects eleven years ago at the undergraduate, postgraduate and independent-practice levels of medical education in order to introduce clinical skills assessment (CSA) methods in Catalonia, as educational as well as assessment tools. From 1993 to December 2003, the IHS carried out formative and summative assessment projects in collaboration with Medical and Nursing Schools and with various Scientific Societies. The number of formative and assessment projects during this period was as follows: final-year medical and nursing students (88 projects and 5870 students evaluated) and professionals of different specialties (35 projects and 4125 professionals evaluated). The final objective is to introduce the concepts of accreditation and reaccreditation for the recruitment and professional promotion of health professionals in the Catalan health service. The IHS CSA projects developed during the last 11 years have demonstrated their reliability, feasibility and acceptability in our context. Performance-based assessment is increasingly respected by students, professionals and institutions as an acceptable and fair method of evaluation. The current and future objective is to develop a Catalan assessment board for health-related professions as an independent agency capable of ensuring the development and quality of the evaluation projects.

 

A comparison of mature and non-mature medical students' transitions into the clinical environment
Keywords: Transition; Curriculum; Mature student; Early experience; Professionalism; Medical school selection
Authors: Shacklady, J., Mason, G., Davies, I., Smithson, S., Dornan, T.
Institution: University of Manchester

Summary: Aim: To compare the transition from the pre-clinical to the full-time clinical environment in terms of age at entry to the MB programme.

Method: The University of Manchester has a horizontally-integrated, problem-based curriculum that offers little clinical exposure in its first two years. 11 weeks after entering the clinical environment, all 111 Y3 students at one teaching hospital were asked to rate on a 7-point Likert scale how well their experiences both inside and outside medical school had prepared them for Y3, and briefly describe their experience of transition. Textual responses were rated independently by two researchers as positive, negative, or neutral. Using a retrospective case-control design, each of 15 mature students (age over 21 at entry to MB programme) was matched to two non-mature students for gender, month of birthday, place of phase 1 study, and year of entry to MB programme. Data are presented as median (interquartile range), and compared using the Mann-Whitney U-test.
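As a minimal illustration of the comparison just described (hypothetical Likert ratings, not the study data), the medians, interquartile ranges and Mann-Whitney U-test might be computed as follows.

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(5)
    mature = rng.integers(4, 8, 15)        # hypothetical 7-point ratings, 15 mature students
    non_mature = rng.integers(2, 6, 30)    # hypothetical ratings, 30 matched non-mature students

    def median_iqr(x):
        return np.median(x), np.percentile(x, 75) - np.percentile(x, 25)

    u, p = mannwhitneyu(mature, non_mature, alternative="two-sided")
    print("mature median (IQR):", median_iqr(mature))
    print("non-mature median (IQR):", median_iqr(non_mature))
    print(f"Mann-Whitney U = {u}, p = {p:.3f}")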

Results: 92 students (83%) gave valid responses. Mature students were more likely to agree that their experiences both inside and outside medical school had prepared them for the transition (inside: 5.0 (1.0) vs 3.5 (2.0), p<0.05; outside: 6.0 (3.0) vs 4.0 (2.0), p<0.05), and more likely to describe their transition positively (p<0.05). Summary and conclusion: Mature students drew on both the official curriculum and their wider life experiences to achieve a smoother transition into the clinical environment than non-mature students. These observations have implications for medical school selection, curriculum design and student support at the time of transition.

 

An examinee-centered approach to setting passing scores for standardized patient assessments
Keywords: standard setting, standardized patients
Authors: McKinley, D., Boulet, J., Hambleton, R.
Institution: ECFMG, University of Massachusetts at Amherst

Summary: Standardized patient examinations are being used under high-stakes conditions (e.g., graduation, licensure, certification) with growing frequency. Concurrently, research on methods to determine the passing score for these types of performance-based assessments has increased. A wide variety of approaches have been considered in the past five years, with varying results. Methods that center on the review of examination materials (e.g., Angoff, Ebel) have been studied, with less than optimal results (Boulet, De Champlain, & McKinley, 2003; Smee & Blackmore, 2001). Increasingly, methods that center on review of examinee performance have been attempted, and various techniques have resulted in the establishment of defensible, reproducible standards. The purpose of this paper is to review the features of various examinee-centered standard setting methods that could be used for standardized patient examinations. An approach to setting passing scores that involves expert review of proxies for actual examinee performance will be described. Application and evaluation of the method are illustrated using examination materials obtained from a high-stakes standardized patient examination. Consistency amongst the panelists was assessed using generalizability analysis, and various regression models were used to derive passing scores. Results from this investigation and other studies will be presented.

References: 1. Boulet, J.R., De Champlain, A.F. & McKinley, D.W. (2003). Setting defensible performance standards on OSCEs and standardized patient examinations. Medical Teacher, 25, 245-249.

2. Smee, S.M., & Blackmore, D.E. (2001). Setting standards for an objective structured clinical examination: the borderline group method gains ground on Angoff. Medical Education, 35, 1009-1010.

 

Criterion Audit – Dilemmas in teaching and assessment
Keywords: criterion audit, general practice trainers, assessment
Authors: Murphy, D., Lough, M.
Institution: NHS Education for Scotland

Summary: This study (1) assessed trainers' ability to identify criterion audit cycle projects judged by trained audit markers to be below the standard acceptable for summative assessment. Confidence with teaching audit, as well as the ability to assess an audit cycle using the summative assessment eight-criteria marking schedule, was compared before and after a workshop designed to offer guidance on these issues. The impact of a trainer submitting a completed audit cycle through the same peer review system as registrars was also considered. All 57 trainers in the Greater Glasgow Primary Care Trust area were invited to be involved in the study and in the preparation, planning and assessment of the study design. Trainers who had themselves submitted an audit project for peer review demonstrated significantly better skills in identifying unsatisfactory summative assessment audit projects. Following a workshop aimed at enhancing skills, there was no difference between trainers who had and had not submitted an audit project for peer review. There was a significant improvement in confidence both in the teaching of audit and in the use of the eight-criteria marking schedule after the workshop. The results of this study suggest that a requirement for trainers to submit a completed audit cycle for peer review by the same method as their general practice registrars (GPRs) may improve skill in the teaching of criterion audit methodology. We believe ongoing participation in the audit process is likely to offer durability of learned skills.

1. Murphy D, Lough M. Criterion audit: dilemmas in teaching and assessment. Education for Primary Care (In Press)

 

Beginning to understand family physicians' reactions to MSF assessment: Perceptions of credibility and usefulness
Keywords: physician assessment, 360-degree, MSF, perceptions
Authors: Sargeant, J.; Mann, K.; Ferrier, S.
Institution: Dalhousie University, Halifax, NS, Canada

Summary: Introduction: Multi-source feedback (MSF), or 360-degree feedback, is a questionnaire-based performance assessment process. It is a reliable and practical means of assessing physicians. However, little has been reported about physicians' perceptions of the MSF process. The purpose of this small study was to begin to explore family physicians' perceptions of their experiences with MSF and of their feedback, and the factors influencing these perceptions.

Design and methods: This is a qualitative study using focus groups for data collection. Fifteen family physicians who had participated in an MSF pilot study took part in three focus groups. We used two analytical processes: content analysis followed by constant comparison.

Results: Although participants generally agreed with patient feedback, reactions to feedback from medical colleagues and coworkers ranged from positive to negative. Two factors influenced these reactions – perceptions of feedback accuracy and credibility, and of usefulness of their feedback. Perceptions of accuracy and credibility, in turn, were influenced by recruiting unbiased yet informed reviewers, ability of reviewers to observe performance, and the evaluation tool. Perceptions of accuracy and credibility also affected opinions of usefulness of the feedback, as did the specificity of the feedback.

Conclusions: This small study suggests that family physicians' reactions to the MSF process are influenced by perceptions of the credibility, accuracy and usefulness of feedback. Areas for further study include: selection of reviewers able to make objective assessments, guidelines to enhance reviewer objectivity, effective provision of feedback, and understanding the implications of negative and emotional responses to feedback.

 

When examiners know candidates - does this influence OSCE scores?
Keywords: OSCA; evaluation; examiners
Authors: Jefferies, A.
Institution: Mount Sinai Hospital

Summary: When examiners know candidates – does this influence OSCE scores? Ann Jefferies, Brian Simmons and Glenn Regehr. Dept. of Paediatrics and Wilson Centre for Research in Education, University of Toronto, Toronto, Ontario, Canada. Although examiners contribute to the variability of OSCE scores, the specific examiner factors that influence OSCE scores are uncertain.

Objective: To ascertain whether examiners' familiarity with candidates influences OSCE scores.

Methods: Twenty-four trainees from 4 Ontario neonatal-perinatal medicine training programs participated in a 10-station OSCE. Twelve faculty examiners completed a checklist and 7-point overall global rating for each station. Scores were converted to percentages. Sixteen candidates and 7 examiners were from one site (A) and 8 trainees and 5 examiners were from the 3 other sites (non-A).

Results: Cronbach's alpha was 0.80 for the checklist and 0.88 for the global rating. Mean checklist and global scores were 67 ± 9 (sd) and 14, respectively. Checklist scores awarded by Site A examiners were significantly higher than those awarded by other examiners (73 ± 7 vs 58 ± 13, p<0.001), but there was no significant difference in global scores (70 ± 9 vs 65 ± 14). There was no significant interaction between candidate site and examiner site (p = 0.12 for checklist and p = 0.21 for global ratings, ANOVA).

Conclusions: These data failed to show significant differences in scores generated by examiners whether or not the examiners knew the candidates. This is an important step towards affirming the objective nature of the OSCE but confirmation with a larger sample size is required.

 

Do students learn during oral exams?
Keywords: evaluation, learning
Authors: Centeno, Angel; Primogerio, Cecilia
Institution: Facultad de Ciencias Biomédicas, Universidad Austral

Summary: In contrast to formative evaluations, summative evaluations are seldom regarded as a learning opportunity.

Objective: To determine whether students learned anything during an oral exam, what they learned, and how the learning process was initiated.

Methods: We conducted semi-structured interviews with 47 medical students from 2nd to 4th year sitting final oral exams in 4 different disciplines (pathology, microbiology, pharmacology, and urology), asking whether they expected to learn anything, whether they thought they had learned anything, and what and how. Interviews were conducted immediately after the exam.

Results: Forty-five students perceived that they had actually learned, mainly content, how to perform during an exam, and how to integrate their knowledge. They mentioned that whatever is learned during an exam is not forgotten. Although 90% did not expect beforehand to learn anything, they recognized that the exam was a good opportunity to learn. Most of the time, the professor induced learning by adding content, correcting the student, answering higher-order questions, or explaining themes, as in any other teaching activity. Learning occurred independently of which professor was in charge of the exam.

Conclusions: Summative final oral exams are a good learning opportunity for content, exam technique and knowledge integration. Faculty members play a decisive role in promoting learning during the evaluation. They should recognize that they can use the evaluation time as a teaching opportunity too, and actively promote learning in this setting.

 

FAIMER: Evaluation of an International Leadership Fellowship in Medical Education
Keywords: medical education leadership programs; evaluation
Authors: Kalishman, S.; Burdick, W.; Morahan, P.; Mennin, S.; Eklund, M.
Institution: University of New Mexico School of Medicine

Summary: The Foundation for Advancement of International Medical Education and Research (FAIMER) Institute selects international, mid-career medical education leaders to enhance their skills in leadership and medical education, as well as to help participants develop strong professional bonds with other international medical educators. Fellows participate in two multi-week Institute sessions in consecutive years, followed by a second year of mentoring a new fellow. Each fellow develops and implements a curriculum innovation project that is intended to serve as a focus for their professional development during the fellowship. During the intersession period, a series of on-line discussions are held using a listserv, facilitated by faculty and participants, with contributions from a variety of content experts. A comprehensive plan to evaluate the effectiveness of the FAIMER Institute was developed. The plan incorporates cross sectional and longitudinal data and is focused primarily on individual participants as the source of data. Surveys, interviews, document analysis, as well as tools for professional network analysis and listserv analysis address elements of the theoretical foundation of the FAIMER Institute program. These elements include:

• The Institute and its impact on fellows
• Fellows' projects
• Fellows' evolution within medical education at their institutions
• Network development and collaboration among fellows in medical education
• Regional networks and collaboration by fellows in medical education

The evaluation plan, evaluation instruments, and early findings from the first year report will be included in this paper.

 

Physicians and quality of care for minority communities
Keywords: Social accountability, community-based education, primary health care, minorities
Authors: Grand'Maison, P.
Institution: Univ. of Sherbrooke

Summary:
Authors: Grand'Maison Paul, Schofield Aurel, Roy Jean, Bonin Brigitte, François José, Ouellette Dorothée. Medical schools are called upon to demonstrate how they fulfil their obligation to be socially accountable. Training competent physicians and supporting the delivery of high-quality care to meet the needs of the populations they serve are significant actions towards attaining this objective. Minority populations are frequently shown to have lower health status and lower access to and quality of health care, usually due to lack of resources and language barriers. In this context, the Association of Canadian Medical Colleges (ACMC), in partnership with stakeholders (medical schools, governments, health professionals, health administrators and communities), has recently launched a project focussing on the Francophone minority communities of Canada. These communities represent less than 10% of the population and are scattered throughout the country. The project has three objectives: encourage students coming from Francophone minority communities to consider, during their training and for their future practice, the health needs of these communities by completing training rotations in them; sustain the educational quality of training sites for these students; and help these sites adopt innovative approaches to health care services that could serve as models for students. The project's underlying principles, activities and anticipated results will be discussed, and the ongoing system implemented to assess the project's processes and results will be presented. References: SANTÉ CANADA. Imputabilité sociale: une vision pour les facultés de médecine du Canada. Ottawa, 2001. HABBICK BF, LEEDER SR. Orienting medical education to community need: a review. Medical Education 1996; 30: 163-171.

 

Comparing two standard setting methods for OSCE
Keywords: standard setting, clinical skill assessment
Authors: Dr Hirotaka Onishi, Dr Cheong Lieng Teng, Dr Francis Yeng Boon Pin, Dr Ramesh Jutti
Institution: International Medical University

Summary: Introduction: To implement an outcome-based curriculum, the use of criterion-referenced measurement will be key. Although a few standard-setting methods have been compared, no consensus is available.

Methods: At the International Medical University (Malaysia), 51 fourth-year medical students took an OSCE in August 2003. Two standard-setting methods, modified Angoff (MA) and borderline regression (BR), were used. For MA, 10 faculty members independently predicted the standard for each item of the 16 stations before the OSCE and modified their predictions after seeing the actual performances and the average scores of all stations. For BR, linear regression analysis was conducted to predict the OSCE score from the four-point global rating (fail, borderline, pass, and distinction). The 95% confidence intervals (CI) of the standards derived from the two methods were compared.
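A minimal sketch of the borderline regression step, assuming the standard is taken as the score predicted at the "borderline" rating (coded here as 2); the scores are simulated and are not the study data.

    import numpy as np

    rng = np.random.default_rng(3)
    global_rating = rng.integers(1, 5, 51)                  # 1=fail, 2=borderline, 3=pass, 4=distinction
    station_score = 40 + 8 * global_rating + rng.normal(0, 5, 51)   # hypothetical percentage scores

    slope, intercept = np.polyfit(global_rating, station_score, 1)  # regress score on global rating
    borderline_standard = intercept + slope * 2                     # predicted score at "borderline"
    print(f"Borderline regression standard: {borderline_standard:.1f}%")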

Results: The point estimate of MA (56.0%) was more lenient than that of BR (53.4%). The range of the 95% CI of MA (49.1-63.0%) was far wider than that of BR (51.0-55.8%). Many of the faculty involved felt that MA was too faculty-intensive.

Discussion: If the OSCE is intended to assess the core content of clinical skills, both standards would be regarded as too low. BR was the less faculty-intensive method of setting the standard and showed a narrower CI.

 

Is generalisability theory useful to improve an OSCE?
Keywords: Reliability, Clinical Skill Assessment
Authors: Dr Hirotaka Onishi
Institution: International Medical University

Summary: Introduction: Generalisability theory gives additional information on assessment tools such as an OSCE. The objective of this study was to optimize the number of stations, and the number of examiners per station, for the OSCE at the International Medical University.

Methods: In August 2003, 51 fourth-year students took a 16-station OSCE. In each station two examiners assessed each student independently. Three factors (students, stations, and examiners nested within stations) were analysed in a G-study. A D-study was then conducted to examine how the generalisability index (GI) would change with different numbers of stations and examiners.

Results: The GI of the OSCE was 0.72. Variance derived mainly from stations (44.2%) and the student-by-station interaction (31.6%). Examiners contributed little to the total variance (2.4%), but the student-by-examiner interaction could not be overlooked (15.4%). With 8 or 12 stations, the GI would be 0.57 or 0.66, respectively. With one examiner per station, the GI would remain at 0.69.

Discussion: The number of stations was the factor contributing most to the generalisability of the OSCE. Inter-rater variability contributed little to the total variance.
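
The D-study projections quoted above can be reproduced approximately from the reported variance percentages; the sketch below assumes that the remaining share of variance (about 6.4%) is the student component, which is an inference from the figures rather than a reported value.

```python
# Minimal D-study sketch for a design with students crossed with stations and
# examiners nested in stations. Variance components are the percentages
# reported above; treating the remainder (~6.4%) as the student component
# is an assumption made for illustration.
var_student          = 6.4    # students (assumed residual share)
var_student_station  = 31.6   # student x station interaction
var_student_examiner = 15.4   # student x examiner(:station) interaction

def g_coefficient(n_stations: int, n_examiners: int) -> float:
    """Relative generalisability coefficient for the given design size."""
    relative_error = (var_student_station / n_stations
                      + var_student_examiner / (n_stations * n_examiners))
    return var_student / (var_student + relative_error)

for n_s, n_e in [(16, 2), (12, 2), (8, 2), (16, 1)]:
    print(f"{n_s} stations, {n_e} examiner(s): G = {g_coefficient(n_s, n_e):.2f}")
# Reproduces (approximately) the values quoted above: 0.72, 0.66, 0.57, 0.69.
```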

 

The use of student feedback to fine-tune a portfolio assessment process
Keywords: portfolio-assessment-evaluation
Authors: Ponnamperuma, G.G., M.H. Davis
Institution: University of Dundee, Scotland

Summary:

Introduction: The Dundee Medical School, Scotland, UK adopted portfolio assessment as its final-year summative student examination five years ago. Students build their portfolios during years 4 and 5, the penultimate and final years. This study traces the influence of student feedback on the evolution of the portfolio assessment process.

Methodology: A questionnaire was administered to students after each portfolio assessment. The questionnaire required the students to score statements on a Likert scale and answer free text questions.

Results: Response rates were: 1999 – 83%; 2000 – 70%; 2002 – 89%; 2003 – 88%. Year on year student feedback will be presented with the changes that the medical school made in response. Student attitudes towards the assessment process improved with each change made. Improvements to the process are continuing, e.g. standardization of examiner questions.

Conclusion: Fine-tuning assessment according to student feedback has resulted in improved student attitudes towards the portfolio assessment process. This study confirms that innovations need to be changed in response to feedback (Kantrowitz et al., 1987).

Reference: Kantrowitz, M., Kaufman, A., Mennin, S., Fulop, T. & Guilbert, J.J. (Eds) (1987). Innovative tracks at established institutions for the education of health personnel. WHO Offset Publication No. 101. Geneva: WHO.

 

Development and validation of an instrument to evaluate the quality of clinical teaching
Keywords: medical education, clinical teaching, evaluation
Authors: Marcela Bitran, Paola Viviani, Beltrán Mena and Rodrigo Moreno
Institution: Pontificia Universidad Catolica de Chile

Summary: Background: Improvement of medical education requires valid and reliable instruments to evaluate the quality of clinical teaching. This evaluation is necessary to provide specific feedback to teachers and objective criteria to school officers for recognizing teaching achievements.

Aim: To design and validate an instrument to evaluate the quality of clinical teaching of individual faculty members, suited for Spanish-speaking medical schools.

Methods: Based on Stanford University's educational categories for medical teaching [1], a 40-item questionnaire was initially devised for the evaluation of clinical teaching by students. After an iterative process that considered feedback from teachers and students, and the results of a factorial analysis, we assembled a 30-item questionnaire, named PUC30, which we present here.

Results: PUC30 contains 30 phrases that describe specific behaviors considered necessary for effective clinical teaching. These behaviors relate to 1) use of patient-based teaching, 2) communication of goals, 3) evaluation, 4) promotion of understanding and retention, 5) promotion of self-directed learning, 6) session control, 7) feedback and 8) learning climate. Using a 4-point Likert-type scale, the student is asked to rate the (observed) frequency with which the teacher displays these behaviors. From August to December 2003, 332 medical students from 4th to 7th year evaluated 63 faculty members using PUC30. Preliminary analysis of the data collected (942 evaluations) shows promising results.

1. "Factorial validation of a widely disseminated educational framework for evaluating clinical teachers" Litzelman et al. Acad. Med. 73: 688-95, 1998.

 

Improving the psychometric characteristics of tutorial-based assessments
Keywords: Tutorial-based assessment
Authors: Kevin W. Eva, Patty Solomon, Alan J. Neville, Michael Ladouceur, Karyn Kaufman, AllynWalsh, Geoffrey R. Norman
Institution: McMaster University

Summary: Introduction: Tutorial-based assessment, despite providing a good match with the educational philosophy adopted by educational programmes that emphasize small group learning, remains one of the greatest challenges for educators working in this context. The current study was performed in an attempt to improve the psychometric characteristics of tutorial-based evaluation by adopting a multiple biopsy approach that requires minimal recording of observations.

Method: After reviewing the literature, a simple 3-item evaluation form was created. The items were "Professional Behaviour," "Contribution to Group Process," and "Contribution to Group Content." Explicit definitions of the items were provided on an evaluation form given to 25 tutors in six different programmes. Tutors were asked to use the form to evaluate their students (N = 169) after every tutorial over the course of an academic unit. Each item was rated on a 10-point scale.

Results: Cronbach's alpha revealed an internal consistency greater than 0.7 in all six programmes. Test-retest reliability was greater than 0.8 in all programmes. The validity of the tool was supported by the observation of increasing ratings over the course of the academic unit and by the finding that more senior students received higher ratings than more junior students.

Conclusion: Consistent with the context specificity phenomenon, adopting a "minimal observations, often" approach to tutorial-based assessment appears to yield better psychometric characteristics than attempts to assess tutorial performance with more comprehensive measurement tools. Further validity testing is underway.
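
For readers who wish to reproduce the internal-consistency statistic, a minimal sketch of Cronbach's alpha for a 3-item form follows; the rating matrix is synthetic and its dimensions are illustrative.

```python
# Minimal sketch of Cronbach's alpha for a 3-item tutorial evaluation form.
# 'ratings' is a hypothetical (students x items) array of 10-point scores.
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """ratings: 2-D array, rows = students, columns = items."""
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
base = rng.normal(7, 1.5, size=(169, 1))                         # shared student effect
ratings = np.clip(base + rng.normal(0, 1, size=(169, 3)), 1, 10)  # three correlated items
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```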

 

The changing landscape of U.S. osteopathic undergraduate medical education
Keywords: program evaluation
Authors: Shen, L.; Meoli, F.G.
Institution: National Board of Osteopathic Medical Examiners

Summary: One distinguishing feature of U.S. osteopathic undergraduate medical education is that the majority of osteopathic medical schools are private rather than state-supported (public) institutions. Academic differences between the two types of school have naturally been of interest to educators and researchers. Ten years ago, a study observed that the academic performance of students differed significantly between the two sectors. The purpose of this study was to re-examine those differences ten years later. As in the earlier study, the outcome measure was the score on a recent osteopathic licensing examination produced by the National Board of Osteopathic Medical Examiners. In this hierarchical linear modeling analysis, a dichotomous variable SECTOR (private vs public) was a predictor in the level-2 model. School mean MCAT, mean percentage of minority students, and mean acceptance ratio (number of acceptances over number of applications) were also explored as level-2 predictors of academic performance. The results indicate that school mean MCAT was the only significant school-level predictor. The differences in academic performance between private and public schools observed ten years ago no longer exist. The results suggest that academic diversity in osteopathic education is no longer determined by school sector. Although the actual dynamics of this change remain a stimulating research agenda, the overall landscape of osteopathic undergraduate medical education has changed significantly in the past ten years.
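
A two-level analysis of the kind described could be sketched as follows; the data file, column names and the use of a random-intercept model in statsmodels are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of a two-level model of licensing-exam scores with
# school-level predictors, in the spirit of the analysis described above.
# The file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per examinee, with the school attended and school-level
# covariates repeated on every row.
df = pd.read_csv("comlex_scores.csv")   # hypothetical file

model = smf.mixedlm(
    "exam_score ~ sector + school_mean_mcat + pct_minority + acceptance_ratio",
    data=df,
    groups=df["school"],                 # random intercept for each school
)
result = model.fit()
print(result.summary())                  # fixed-effect estimates for SECTOR etc.
```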

 

Improving the Inter-item Correlations on a Structured Oral Certification Examination
Keywords: structured oral, key feature, inter-item correlation
Authors: Allen, T. Lang E, Chauny JM, Blouin D, Smith W, Dandavino A.
Institution: Laval University, The Collège de médecins du Québec and the Centre d'évaluation des sciences de la santé de l'Université Laval, Québec, Canada

Summary: Performance-based examinations usually show low inter-case correlations (0.1 to 0.2), reflecting content specificity, which increases the number of test cases needed to achieve reliable results. We hypothesized that cases designed and structured specifically to test clinical reasoning skills, as opposed to the required knowledge itself, would show higher inter-case correlations. This hypothesis was tested during the development and administration of part of a new certification examination in emergency medicine for the Collège de médecins du Québec, from 2001 to 2003. Cases and case-specific criterion-based scoring grids were developed using a key feature approach. This approach identifies the essential or critical steps in the resolution of a clinical problem, and testing concentrates on these aspects. Key features are both topic and discipline specific: the topics used had previously been identified as particularly pertinent to emergency medicine, and five practicing emergency physicians developed the key features and cases, with group testing and validation of individual work. The examination has a structured oral format, each candidate being tested independently by five different examiners, with two cases per examiner. There are no standardized patients. The examiner presents the clinical information, solicits answers, and grades performance. Total examination time is 150 minutes. Results from three consecutive examinations show that the average inter-case correlation is consistently > 0.4. Acceptably reliable results (coefficient alpha > 0.8) would be achieved with only four or six of our cases, in our examination context. The key feature approach is one way of improving the testing efficiency of structured performance-based examinations.
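
The link between inter-case correlation and the number of cases needed for a given reliability can be illustrated with the standard Spearman-Brown prophecy formula; this sketch is an illustration of the quoted figures, not the authors' own analysis.

```python
# Spearman-Brown style illustration: predicted coefficient alpha for an exam
# of n cases, given an average inter-case correlation r. Not the authors'
# computation, just the standard formula applied to the figures quoted above.
def predicted_alpha(n_cases: int, r: float) -> float:
    return n_cases * r / (1 + (n_cases - 1) * r)

r = 0.4
for n in range(2, 11):
    print(f"{n:2d} cases: alpha = {predicted_alpha(n, r):.2f}")
# With r = 0.4, about six cases already reach alpha = 0.80.
```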

 

A Comparison of Questions in Two Formats (MCQ and SAQ) on an Anatomy Exam for Medical Students: What Students Can Teach Us
Keywords: multiple choice questions; short answer questions
Authors: Harris, June A., McKay, Donald W.
Institution: Memorial University of Newfoundland

Summary: Context: Preparation of short answer questions (SAQ) in multiple choice (MCQ) format. Objective: To determine whether the results of questions presented in MCQ format differ from those of the same questions presented in SAQ format using student-generated distractors as options. Subjects: First-year medical students at Memorial University of Newfoundland. Materials: A portion of anatomy exam questions prepared using student-generated distractors. Some questions required a short phrase/sentence and others could be answered with one word. Methods: Twenty-five percent of questions on two versions of the same exam were given in either MCQ or SAQ format. Incorrect answers from the previous year's exam were used as the distractors in questions converted to the MCQ format. Both MCQs and SAQs were hand-scored using an answer key. Results: A statistically significant difference was found between performance on i) MCQs and SAQs overall and ii) sentence MCQs and sentence SAQs. However, there was no statistically significant difference in performance between one-word MCQs and one-word SAQs. In the one-word MCQ format, over twice as many students chose incorrect options, and a larger number of the incorrect options were chosen, than in the sentence MCQ format. Conclusions: Students performed significantly better on MCQs overall. When the answer was a short phrase/sentence, they performed significantly better on MCQs. However, when the answers were one word, students performed equally well in both formats. Students are able to generate plausible one-word distractors for MCQs, and their responses can be utilized to create high-quality examinations.

References:

1. Case SM, Swanson DB, Ripkey DR. Comparison of items in five-option and extended-matching formats for assessment of diagnostic skills. Academic Medicine 1994; 69(10): October Supplement S1-S3.

2. Damjanov I, Fenderson, BA, Veloski JJ, Rubin E. Testing of medical students with open-ended, uncued questions. Human Pathology 1995; 26(4): 362-365.

3. Veloski JJ, Rabinowitz HK, Robeson MR, Young PR. Patients don't present with five choices: an alternative to multiple-choice tests in assessing physicians' competence. Academic Medicine 1999; 74(5): 539-546.

 

The Structure of Student Achievement, Performance and Clinical Reasoning in Medical School and Licensing Exams: A Factor Analytic Study
Keywords: clinical reasoning, achievement, medical school curriculum
Authors: Terri Collin and Claudio Violato
Institution: University of Calgary

Summary: Purpose: To determine the structure of achievement, performance and clinical reasoning in a medical school program and the Medical Council of Canada (MCC) licensing exams.

Method: Factor analysis (principal component extraction and varimax rotation) was employed to identify the underlying factors of written achievement tests (e.g., cardiovascular, respiratory, blood, renal, pediatrics), developmental formative tests, performance tests (e.g., OSCEs in psychiatry, surgery), and the MCC licensing exams. The classes that graduated in 2002 and 2003 participated in the study (n = 163).

Results: Exploratory factor analyses yielded five factors that accounted for 63% of the total variance: 1) General Formative Achievement (MCC MCQ items, pediatrics and medicine; some 1st-, 2nd- and 3rd-year achievement tests, formative exams and OSCEs), 2) Clinical, Legal and Policy Analysis (e.g., population health and prevention, clinical legal exams), 3) Human Development and Reproduction (Obs/Gyn, reproduction, human development), 4) Clinical Reasoning (formative and MCC tests of clinical reasoning), and 5) Applied Biomedical Knowledge (e.g., research methods, medical skills in senior years, psychiatry on the MCC).

Conclusions: Five identifiable, theoretically meaningful and cohesive factors underlie the education and licensing of physicians in Canada. These factors are based on the development of general achievement, skills, clinical reasoning and applied biomedical knowledge.
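
A generic sketch of the analysis named in the Method (principal component extraction followed by varimax rotation) is given below; the input file and the choice to retain five components mirror the description above, but the code and data are illustrative only.

```python
# Generic sketch of principal component extraction + varimax rotation on a
# (students x measures) score matrix. The data file is hypothetical.
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Varimax rotation of a (variables x factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        if s.sum() < criterion * (1 + tol):
            break
        criterion = s.sum()
    return loadings @ rotation

scores = np.loadtxt("exam_scores.txt")            # hypothetical (163 x measures) matrix
z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
corr = np.corrcoef(z, rowvar=False)

eigvals, eigvecs = np.linalg.eigh(corr)           # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1][:5]             # keep the 5 largest components
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])

rotated_loadings = varimax(loadings)              # rotated loading matrix for interpretation
explained = eigvals[order].sum() / eigvals.sum()  # proportion of variance retained
print(f"Variance explained by 5 components: {explained:.0%}")
```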

 

The Relationship Between Performance in Residency and Scores on the Medical Council of Canada Part 1 Examination
Keywords: residency, postgraduate performance, program directors, Medical Council of Canada
Authors: Woloschuk, W.; Lemay, J.F.; Jones, A.
Institution: University of Calgary

Summary: Purpose: To examine the relationship between resident program directors' assessments of residents and Medical Council of Canada (MCC) Part 1 examination scores.

Method: An instrument measuring several pertinent attributes (medical knowledge, clinical judgment, clinical skills, professional/humanistic qualities, and presentation skills) was mailed at the end of the first postgraduate year to directors of residency programs that were training University of Calgary medical school graduates. Directors rated the residents on each dimension (1 = weaker than most residents; 5 = stronger than most residents). The structure of the instrument was assessed using factor analysis. The reliability of scores was estimated using Cronbach's alpha. Pearson correlations were computed to examine the relationship between directors' ratings of residents and their MCC scores.

Results: For the three classes (2000-02), program directors rated 50 (71%), 45 (68%) and 40 (57%) residents, respectively. A single global factor comprising 9 items (alpha = 0.94) was extracted and accounted for 68% of the variance. An overall cohort mean of 3.69 (SD = 0.62) was observed. Using the global mean score, 3 (2%), 50 (37%) and 82 (61%) residents were rated as weaker than, similar to and stronger than most residents, respectively. Correlations between global and MCC scores were weak (-0.02 to 0.20), as were correlations between discipline-specific global scores and the respective MCC subscale scores (Medicine -0.11; Psychiatry -0.13; Pediatrics 0.25).
Conclusions: Program directors, using a global impression, assessed a majority of graduates as "stronger" than most residents encountered. Postgraduate performance is assessed using dimensions different from those measured by the MCC Part 1 exam.

 

Progressive Formative Testing: Which Tests Best Predict Scores on the Medical Council of Canada Licensing Exam?
Keywords: progressive testing, formative tests, knowledge, medical students, Medical Council of Canada
Authors: Woloschuk, W.; Jones, A.
Institution: University of Calgary

Summary: Purpose: To determine which formative tests administered at certain milestones in the training of student physicians best predict scores on the Medical Council of Canada (MCC) Part 1 exam.

Method: Progressive formative testing, known as the Associate Dean's Test (ADT), was introduced into the 3-year medical program to provide students with feedback about their academic progress. The four mandatory tests, which are dissimilar, were administered to the Classes of 2002 and 2003 midway through first year (ADT-1a) and at the end of first (ADT-1b), second (ADT-2) and third (ADT-3) years. The content of the tests examined cumulative knowledge learned in all courses taken up to that point in time. A formative clinical knowledge test (CKT) was also administered at the end of second year. The MCC final score for all 162 (100%) students in this cohort was also obtained. Reliability of formative test scores was estimated using Cronbach's alpha. Linear regression (backwards elimination) was used to determine which formative tests contributed most to the prediction of MCC final scores.

Results: The alpha coefficients for all tests were 0.74 or higher. Correlations between the formative tests and the MCC score ranged from 0.53 (CKT) to 0.66 (ADT-1b & ADT-3). Beta weights of the formative tests in the final regression model were 0.35 (ADT-3), 0.30 (ADT-1b), and 0.18 (ADT-1a); multiple R = 0.73.

Conclusions: Formative tests written near the beginning and at the end of medical school provided relatively high prediction of MCC performance.
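
A generic sketch of backwards-elimination linear regression of the kind described follows; the data frame, column names and the p > 0.05 removal rule are illustrative assumptions, not the authors' procedure.

```python
# Generic sketch of linear regression with backwards elimination, predicting
# the MCC final score from formative test scores. File, column names and the
# p > 0.05 removal rule are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("formative_scores.csv")          # hypothetical file
predictors = ["ADT_1a", "ADT_1b", "ADT_2", "ADT_3", "CKT"]
y = df["MCC_final"]

while predictors:
    X = sm.add_constant(df[predictors])
    model = sm.OLS(y, X).fit()
    pvals = model.pvalues.drop("const")
    worst = pvals.idxmax()
    if pvals[worst] <= 0.05:                      # all remaining predictors significant
        break
    predictors.remove(worst)                      # drop the least significant predictor

print(model.summary())                            # final model coefficients and R-squared
```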

 

Using patient video to assess clinical diagnostic skills: evidence for validity
Keywords: assessment, computer-based testing, multimedia, validity
Authors: Steven A. Lieberman; Ann W. Frye; Stephanie D. Litwins; Karen A. Rasmusson; John R. Boulet
Institution: University of Texas Medical Branch and Educational Commission for Foreign Medical Graduates

Summary: Increasing use of computer-based testing and the ease of deploying desktop video make the use of patient video to assess clinical skills quite feasible, yet little evidence for validity has been gathered to date. We captured video clips of patients with abnormal findings on neurologic examination, created parallel text and video-based items related to anatomy and etiology, and constructed parallel forms of a computer-based exam containing alternate versions (text/video) of each item. The exam was administered to 196 first-year students (MS1), 147 fourth-year students (MS4), and 39 internal medicine residents (IMR). Generalizability studies revealed comparable generalizability and dependability coefficients for video- and text-based items. Three-way ANOVA using level of training (MS1, MS4, IMR), medium (text, video), and task (anatomy, etiology) as independent variables showed a training X task interaction, and a main effect for level of training (Table). Overall, both the MS4 and IMR groups significantly outperformed the MS1 group. There was no main effect of medium and no medium X training interaction. The generalizability study results suggest that the choice of medium does not have an appreciable effect on the reliability of examinee scores. The lack of an effect of medium (text/video) or a medium X training interaction suggests that simply replacing a text description of a physical exam finding with a patient video clip does not provide a more valid assessment of skills in neurologic clinical diagnosis. Alternate approaches (e.g., including normal findings, more complex or subtle findings, or background features) may better differentiate examinee competence.

 

Convergent and Divergent Validity of Scores for Kolb's Learning Style Inventory, Felder's Index of Learning Styles, and Riding's Cognitive Styles Analysis Using the Multitrait Multimethod Matrix
Keywords: construct validity, learning style, Kolb
Authors: Cook, D.A.; Smith, A.J.
Institution: Mayo Clinic College of Medicine, Rochester, Minnesota, USA

Summary: Cognitive and learning styles (CLS) research is limited by the lack of evidence supporting valid interpretation of CLS assessments. We sought evidence to support the convergent and divergent validity of scores from three CLS instruments: Felder and Solomon's "Index of Learning Styles" (ILS), Kolb's "Learning Style Inventory" (LSI), and Riding's "Cognitive Styles Analysis" (CSA). The ILS measures four CLS domains: Sensing-Intuitive ("SensInt"), Active-Reflective ("ActRef"), Sequential-Global ("SeqGlob") and Visual-Verbal ("VisVerb"). The LSI appears to overlap the SensInt and ActRef domains, and the CSA appears to overlap SeqGlob and VisVerb.

Methods: We administered the instruments to 29 Family Medicine and Internal Medicine residents. We calculated correlations using Pearson's r and applied the Multitrait Multimethod Matrix (Campbell and Fiske, 1959) to evaluate convergent and divergent validity.

Results: The domains are independent except for the ILS SeqGlob. Convergent validity is demonstrated when the correlation between the same construct measured by different methods (dark shading in the matrix) is significant and large; this is present only for ActRef. Divergent validity is demonstrated when the correlation between the same construct across different methods (dark shading) is greater than the correlations between different constructs within one method (light shading); again, only ActRef meets this criterion.

Conclusion: This evidence supports the validity of interpretations using the ActRef domains of the ILS and LSI. It fails to support the SensInt domains of the ILS and LSI or the SeqGlob and VisVerb domains of the ILS and CSA. This could be due to differences in the underlying constructs (likely), flaws in the constructs or instruments, or both.
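
To make the multitrait-multimethod comparison concrete, the sketch below (hypothetical file and column names) extracts the monotrait-heteromethod and heterotrait-monomethod correlations whose comparison defines convergent and divergent validity.

```python
# Minimal MTMM sketch. Columns are hypothetical scores for two traits
# measured by two methods (ILS and LSI), mirroring the design above.
import pandas as pd

df = pd.read_csv("cls_scores.csv")        # hypothetical file, one row per resident
corr = df[["ILS_ActRef", "LSI_ActRef", "ILS_SensInt", "LSI_SensInt"]].corr()

# Convergent validity: same trait, different methods (monotrait-heteromethod)
convergent_actref  = corr.loc["ILS_ActRef", "LSI_ActRef"]
convergent_sensint = corr.loc["ILS_SensInt", "LSI_SensInt"]

# Divergent check: the convergent value should exceed the correlations of
# different traits within a single method (heterotrait-monomethod)
within_ils = corr.loc["ILS_ActRef", "ILS_SensInt"]
within_lsi = corr.loc["LSI_ActRef", "LSI_SensInt"]

print(f"ActRef convergent r = {convergent_actref:.2f} "
      f"(should exceed {max(within_ils, within_lsi):.2f})")
print(f"SensInt convergent r = {convergent_sensint:.2f}")
```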

 

Evaluating physician CanMEDS competencies in neonatal-perinatal medicine using an objective structured clinical examination (OSCE)
Keywords: OSCE, neonatal, CanMEDS roles
Authors: Brian Simmons, Ann Jefferies, Marc Blayney, Kyong Lee, Henry Roukema, Martin Skidmore, Jodi McIlroy, Diana Tabak
Institution: Depts. of Paediatrics, University of Toronto, Toronto; University of Ottawa, Ottawa; McMaster University, Hamilton; University of Western Ontario, London; Wilson Centre for Research in Education, University of Toronto, Toronto, ON, Canada.

Summary: Background: The Royal College of Physicians and Surgeons of Canada has defined 7 CanMEDS competencies: Medical Expert, Communicator, Collaborator, Manager, Health Advocate, Scholar and Professional. Training programs are challenged to assess these competencies.

Objectives: To design an OSCE for neonatal-perinatal medicine trainees incorporating these competencies.

Design/Methods: Ten 12-minute stations were developed - 6 used standardized parents (SPs) and 4 used standardized health professionals (SHPs) played by health workers. Examiners completed station specific checklists, CanMEDS global ratings and an overall global rating. SPs/SHPs completed communication global ratings.

Results: 24 trainees participated. Each station assessed 4-6 competencies. There were significant correlations between checklist scores (67+/-9, mean +/- SD) and examiners' overall global scores (66+/-14, r = 0.97), medical expert global scores (70+/-12, r = 0.96), communicator global scores (72+/-15, r = 0.92) and SPs' overall global ratings (62+/-14, r = 0.92). The inter-station alpha coefficient ranged from 0.80 to 0.88. The CanMEDS ratings differentiated first- and second-year trainees.

Conclusions: CanMEDS competencies were evaluated with a high degree of reliability/validity. The OSCE allowed assessment of competencies not easily assessed through traditional examinations.

 

Five years of Progress Testing at Charité Universitätsmedizin Berlin, Germany
Keywords: Progress Test, Educational Measurement, Curriculum, Assessment
Authors: Föller, T.; Brauns, K.; Fuhrmann, S.; Hanfler, S.; Hoffmann, J.; Kölbel, S.; Mertens, A.; Müller, B.; Nouns, Z.; Wieland, D.; Osterberg, K.
Institution: AG Progress Test Medizin, Charité Universitätsmedizin Berlin, Schuhmannstr. 20/21, 10117 Berlin, Germany

Summary: Aim: To present the results of 5 years of progress testing at the Charité Universitätsmedizin Berlin, Germany.

Summary of Work: Five years ago the Charité Universitätsmedizin Berlin, Germany, implemented the "Progress Test Medizin" (PTM), based on the Maastricht progress test [1]. Nine tests have been carried out in the "Reformstudiengang Medizin" (Reformed Curriculum, RC) and four in the traditional curriculum (TC). The PTM is now compiled in cooperation with the University of Witten-Herdecke and has been expanded to four more faculties. The PTM is administered twice a year to all students of the RC and to all students of the participating classes of the TC, and consists of 200 multiple-choice test items (one best answer plus a "don't know" option) pitched at the level of knowledge expected at graduation. The test score is calculated as the number of correct minus incorrect answers; "don't know" does not change the score.

Summary of Results: Knowledge grows in both the RC and the TC, but the growth curves of the RC and TC differ for the first two years of the curriculum, as does the percentage test score in the corresponding classes (see figure). One reason is the different amount of clinical content in the first years of the two curricula. The split-half reliabilities for correct answers per semester range from 0.89 to 0.96.

Conclusion: The PTM is an objective, reliable and sufficiently valid tool for assessing the growth of knowledge, but needs adjustment for the content of basic and clinical sciences.

1. van der Vleuten CPM, Verwijnen GM, Wijnen WHFM. Fifteen years of experience with progress testing in a problem-based learning curriculum. Medical Teacher 1996; 18(2): 103-109.
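
The scoring rule and split-half reliability mentioned above can be sketched as follows; the answer coding and file name are hypothetical.

```python
# Sketch of the PTM scoring rule (correct minus incorrect, 'don't know' = 0)
# and a split-half reliability estimate. The answer matrix is hypothetical:
# +1 = correct, -1 = incorrect, 0 = don't know; one row per student,
# one column per item (200 items).
import numpy as np

answers = np.load("ptm_answers.npy")            # hypothetical (students x 200) matrix

total_score = answers.sum(axis=1)               # correct minus incorrect per student

# Split-half reliability: correlate odd- and even-numbered item halves,
# then apply the Spearman-Brown correction for full test length.
odd_half = answers[:, 0::2].sum(axis=1)
even_half = answers[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd_half, even_half)[0, 1]
split_half_reliability = 2 * r_half / (1 + r_half)
print(f"split-half reliability = {split_half_reliability:.2f}")
```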

 

The Universities Medical Assessment Partnership (UMAP): First use in a 'high stakes' examination
Keywords: progress test, mcq, assessment
Authors: Owen A, Byrne GJ, Mahadev GK, Benbow E, O'Neill PA on behalf of the UMAP partners
Institution: University of Manchester

Summary: Background: The UMAP Project, founded in 2002, is an inter-university collaboration aiming to improve and regulate the quality and reliability of written assessment items for undergraduate medical students at five medical schools in England. In January 2004, the first 'high stakes' examination was delivered using questions derived from the UMAP bank. An examination of 125 one-best-answer-from-five MCQs was used as a replacement for the traditional 250-question true/false progress test at Manchester Medical School.

Methods: All 3rd (n=426) and 4th year (n=388) medical students were examined. Mean scores and standard deviations were calculated for all cohorts. Reliability was measured (Cronbach's alpha). Discrimination was assessed by comparing the highest and lowest scoring quintiles in each year.

Results: Mean scores for year 4 (72.3, 58%, SD 8.2) were higher than for year 3 (60.7, 49%, SD 9.4) (p<0.05). Cronbach's alpha was 0.78 for year 4 and 0.72 for year 3. Higher scores were achieved by the highest-scoring quintile of students in 104 questions (70%) overall (year 4 = 77%, year 3 = 75%).

Discussion: MCQs derived from the UMAP bank appear to discriminate well and to be an appropriate alternative to traditional true/false items. Reliability may be improved by increasing the number of questions used in each progress test.

 

Sharing resources for UK undergraduate written assessments – One year of UMAP
Keywords: assessment, partnership, mcq, emq
Authors: Byrne GJ, Owen A, Newble D, Barton R, Garden A, Roberts T, O'Neill PA on behalf of the UMAP partners
Institution: University of Manchester

Summary: Introduction: The UMAP Project, founded in 2002, is an inter-university collaboration that aims to deliver high-quality, reliable written assessments to medical students in England. Having established methods of inter-site question writing, the project is now focused on improving the quality of questions across each site to enable the use of items in partner school examinations.

Methods: One-best-answer-from-five and extended matching questions are written at site-specific workshops. Each workshop includes a formal teaching period based on principles distilled from continuing consultation with centres of excellence in the Netherlands and the United States. Questions are written by pairs of experienced clinicians and submitted in hand-written form. Questions are then sent for review to a site-specific review panel of experienced question writers who focus on the relevance, structure and difficulty of each item. Questions are tagged in accordance with an agreed two-dimensional matrix covering the undergraduate curricula of all five partner schools and based on the GMC's 'Tomorrow's Doctors'.

Results: UMAP has brought together over 150 question writers at the five partner schools. Over one thousand items have been written, including 750 EMQs. UMAP is establishing a robust and uniform method of item generation across the schools. In January 2004, the bank was used for the first time in a 'high stakes' examination in Manchester.

Discussion: UMAP demonstrates the benefits of inter-medical school collaboration in the delivery of high quality written assessment and is establishing a high-quality resource that will be valuable in forthcoming years.

 

Seven years experience of progress testing in Manchester UK
Keywords: progress test, MCQ,
Authors: Mahadev GK, O'Neill PA, Owen AC, McCardle P, Benbow E, Byrne GJ
Institution: University of Manchester

Summary: Background: The progress test, developed in Maastricht, has been a key assessment for the Manchester clinical undergraduate curriculum since 1997. We hypothesized that such a test would be reliable and that each undergraduate cohort would perform better on each successive exam, with senior students scoring higher than junior students.

Methods: Performance was analysed for five clinical undergraduate cohorts of the curriculum between 1997 and 2001 (n=1947). Each student took five progress tests over the last three years of the MB ChB programme. Each test consisted of 250 true/false questions representative of the 4 core modules within the problem-based curriculum, to achieve face validity. For each student cohort, means and standard deviations were calculated and scores compared (Student's t-test). The reliability of each test (Cronbach's alpha) was calculated and estimates of the average 'test-by-test' change were derived (Stata version 6.0).

Results: Each student cohort improved on its previous average (p=0.004). Moreover, during each round of testing, senior students scored higher than less senior students (year 4 > year 3, p=0.045; year 5 > year 3, p=0.002; year 5 > year 4, p=0.06). Average reliability was 0.84 (range 0.79-0.89). Average test-by-test changes for each cohort demonstrated significant linear trends (p<0.001 in all cases).

Conclusion: Despite modification from the original Maastricht model, the progress test in Manchester is a valid, reliable and reproducible method for examining the acquisition of knowledge across the clinical undergraduate curriculum.

 

The Reliability of the Operative Competence Assessment for Surgical Trainees Using Video and Direct Observation
Keywords: In-service training, clinical competence, educational measurement
Authors: Burt CG, Ricketts C, Grant JR, Wilkins DC.
Institution: Peninsula Medical School

Summary: Objectives: Case variation [1] and subjective judgments affect reliability when assessing surgical trainees. The aim was to investigate the reliability of the Operative Competence Assessment using video and direct observation.

Methods: A validated global rating scale called the Operative Competence Assessment was used to score surgeons performing inguinal hernia repairs. Surgical assessors blinded to the operator's seniority scored the edited operative videos only if they considered that they had obtained sufficient information. Other surgical trainees were scored by direct observation in theatre, where external surgeons participated to reduce rater bias. The results were analysed using Generalisability Theory.

Results: Seventy assessors (75%) felt able to judge the videos and scored all 6 operations. The assessors' contribution to variance was 0.6%, indicating a negligible hawk or dove effect. 78.7% of variance was due to the assessor-surgeon interaction, exceeding the 20.7% variance between the operating surgeons. The assessors' comments suggest that their judgments were based on a comparison with their own technique. Thirteen assessors scored 5 surgeons by direct observation in theatre. The inter-surgeon variation was 91.0% and the case variation was 9.0%. The Generalisability coefficients for the theatre assessment sessions ranged from 0.90 to 0.99.

Conclusions: Assessment of surgical skill by video alone is insufficient. Direct observation provides additional information necessary for accurate assessment. The Operative Competence Assessment was a highly reliable model when used in the operating theatre.

1. Norman GR, Tugwell P, Feightner JW et al. Knowledge and clinical problem-solving. Med Educ 1985; 19: 344-56.

 

Detecting cheating in medical examinations
Keywords: Cheating; examinations; postgraduate
Authors: McManus, Chris
Institution: University College London

Summary: In any high-stakes examination, some candidates will be tempted to pass by cheating, which can take many forms (Cizek 1999). Many undergraduate and postgraduate medical examinations are multiple-choice, and are taken by candidates sitting at desks in large examination halls. Computer-marked answer sheets make it surprisingly easy for candidates to see the answers of candidates seated alongside or in front, so there is a risk that a candidate's answers are not entirely their own. Any form of cheating threatens an examination's validity, since the mark scored does not reflect the candidate's true knowledge. It is also a threat to health care, since candidates who cheat are certified as competent despite having insufficient knowledge. Statistical methods for the detection of cheating examine the pattern of results, assessing whether candidates are unduly similar to one another, particularly if they are seated adjacently. Although this can be done using known theoretical distributions of the relevant statistics (e.g. Frary, Tideman, & Watts 1997), methods using empirical distributions make fewer assumptions. I will describe the application of a program called Acinonyx to a medical examination. The program implements and extends the method of Angoff (1974), which examines the answers of all possible pairs of candidates. That number can be large: if 1,000 candidates take an exam there are 499,500 pairs, so the computational load is substantial and the high risk of type I errors needs to be taken into account.
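
A bare-bones version of the pairwise idea, not the Acinonyx program itself, is sketched below: for every pair of candidates, count matching wrong answers and compare each pair's count with the empirical distribution over all pairs. Data, coding and the flagging threshold are hypothetical.

```python
# Bare-bones sketch of Angoff-style answer-copying screening: count, for every
# pair of candidates, the items on which both chose the same *wrong* option,
# then flag pairs far out in the empirical distribution. Not the Acinonyx
# program; data and threshold are hypothetical.
from itertools import combinations
import numpy as np

responses = np.load("responses.npy")   # hypothetical (candidates x items) chosen options
key = np.load("key.npy")               # hypothetical correct option per item

wrong = responses != key               # True where the candidate answered incorrectly
n_candidates = responses.shape[0]

pair_scores = {}
for i, j in combinations(range(n_candidates), 2):   # n*(n-1)/2 pairs
    shared_wrong = np.sum(wrong[i] & wrong[j] & (responses[i] == responses[j]))
    pair_scores[(i, j)] = shared_wrong

scores = np.array(list(pair_scores.values()))
# Flag pairs whose count is extreme relative to the empirical distribution;
# with n*(n-1)/2 pairs a very conservative cut-off is needed (type I error risk).
threshold = np.percentile(scores, 99.99)
suspect_pairs = [pair for pair, s in pair_scores.items() if s > threshold]
print(f"{len(suspect_pairs)} pairs above the {threshold:.0f}-match threshold")
```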

 

Changes in standard of candidates taking the MRCP(UK) Part 1 Examination
Keywords: Postgraduate examinations; standards; marker questions; MRCP(UK)
Authors: Chris McManus, Jennifer Mollon, Oliver Duke, Allister Vale
Institution: MRCP(UK) Central Office, St Andrews Place, London NW1 4LE, UK

Summary: Maintenance of standards is a problem for postgraduate medical examinations, particularly if norm-referencing is the only method of standard setting. The MRCP(UK) Part 1 Examination includes marker questions in each diet which are unchanged from questions used in a previous diet. Here we describe two complementary studies of marker questions in diets of the Examination over the years 1985 to 2002, to assess whether standards have changed. Study 1 analysed routinely collected information on the performance of 4405 marker items, using a statistical model to assess changes in performance across diets. Study 2 compared the performance of individual candidates on 28 individual marker items shared by the 1996/2 and 2001/3 diets. Study 1 found evidence that candidate performance on the MRCP(UK) Part 1 Examination improved gradually between 1985 and 1997, and then declined sharply until 2001. The 'dog-leg' at 1997/3 did not result from changes in Examination Regulations or candidate mix. Study 2 confirmed that 2001/3 performance was significantly worse than 1996/2 performance in graduates of UK medical schools, and that passing candidates in 2001/3 performed less well than passing candidates in 1996/2. Setting the pass mark by norm-referencing allowed candidates to pass the Examination who had performed less well than previous cohorts. As a result, the current MRCP(UK) Part 1 and Part 2 Examinations use criterion-referencing. The reasons for the declining performance of UK medical school graduates are not clear, but have wider implications for medical education. Further studies are needed of other postgraduate and undergraduate examinations.

 

Usefulness of a course on communication skills for first year residents
Keywords: Communication skills course; residents; effectiveness; speciality difference
Authors: Nogueras A., Casanovas A., Bernaus M., Claries X.*, De Nadal J.
Institution: Institut Universitari Parc Taulí (UAB); *Institut d'Estudis de la Salut, Barcelona (Spain).

Summary: Introduction: Students at faculties of medicine in Spain are not offered specific training in communication, in spite of the fact that these skills are perceived as vital in medical practice. Evaluating training systems is by no means an easy task.

Objective: To evaluate the effectiveness of a course on communication skills

Material and Methods: Location: Sabadell (Spain), a reference hospital for 380,000 inhabitants, situated 30 km from Barcelona.

Personnel: a postgraduate medical programme covering 15 specialities with 31 first-year residents (R1). Subjects: 42 first-year residents voluntarily enrolled in a training programme in doctor-patient communication skills over the years 2000-2001.

Material: 1) A communication skills course, 20 hours over a 5-day period outside working time, based on simulated patients and role playing. 2) A validated videotape modelling poor communication skills in a simulated encounter in which a doctor in the emergency service deals with a complaint from a patient's relative. As far as communication skills are concerned, the video contains many aspects that could be improved.

Procedure: Participants watched the video both before and after the course, and scored the doctor's communication skills on a scale from 0 to 10. Statistical analysis: SPSS, Mann-Whitney, ANOVA.

Results: Attendance was 100%. Residents' scores (SD) for the 'poor' video before and after the course are shown in Fig. 1 (p=0.002). Comparing specialities, family medicine residents scored better.

Conclusions:

1. A communication course as described improves residents' ability to recognise poor communication skills.
2. Specialities differ in how useful they find the course.

 

Improving standard setting for Key Feature Problems in the certification examination for Australian general practice
Keywords: Key Feature Problems, standard setting methods
Authors: Farmer, EA., Hinchy J
Institution: Royal Australian College of General Practitioners

Summary: The Royal Australian College of General Practitioners (RACGP) is solely responsible for certifying competence for unsupervised Australian general practice. It has held a certifying examination since 1967, consisting of written and clinical components, which became a legislative requirement in 1996. A Key Feature Problems (KFP) paper was introduced into the written tests in 1997 to test clinical decision-making in a three-hour examination consisting of 25 questions, held twice a year. Each question has two to four related parts and demands multiple answers that are either written in or selected from a long menu. Standard setting of all examination components was introduced in 1998, and replaced the previously fixed pass mark of 66%. The paper will present an overview of our experience and results from standard setting with the modified Angoff and the Ebel method, the two methods used between 1998 and 2003 for the KFP component. Conceptually these methods assume that each answer is independent and is scored as simply right or wrong, as in 'single best answer' multiple-choice tests. A new method is being trialled that allows for the more complex scoring key of the KFP approach in which multiple correct answers are usually required and answers may not be independent of each other. We will describe the new method and discuss critical issues concerning the standard-setting judgements produced for each of the three methods.
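
For comparison, a minimal sketch of a modified Angoff cut score for a dichotomously scored paper is shown below; the judge estimates are synthetic, and the new RACGP method for polytomous KFP scoring is not reproduced here.

```python
# Minimal sketch of a (modified) Angoff cut score for a dichotomously scored
# paper: each judge estimates the probability that a borderline candidate
# answers each item correctly; the cut score is the mean over items and judges.
# The estimates below are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_judges, n_items = 8, 25
judge_estimates = rng.uniform(0.4, 0.9, size=(n_judges, n_items))

per_judge_cut = judge_estimates.mean(axis=1)     # each judge's implied pass mark
cut_score = per_judge_cut.mean()
se = per_judge_cut.std(ddof=1) / np.sqrt(n_judges)
print(f"Angoff cut score = {cut_score:.1%} (95% CI roughly +/- {1.96 * se:.1%})")
```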

 

Feedback on bedside teaching skills using a portable digital recording device
Keywords: reflective teaching, bedside teaching, feedback, patient awareness
Authors: Hamouda, A.
Institution: Imperial College

Summary: Background: Effective bedside teaching is key to undergraduate training, but clinicians seldom have opportunity to observe their own teaching style, receive structured feedback or examine their teaching from the standpoint of learners and patients. No existing rating scales include the patient's perspective.

Intervention: We describe a simulated bedside teaching environment using a Simulated Patient (SP) and 4 Simulated Learners (SLs) (pre-briefed medical students, all trained to use a modified Stanford teaching rating scale [1] to standardise responses and increase awareness). Each 10-minute clinical teaching scenario is recorded using portable, securely-encrypted digital technology (the Virtual Chaperone). The tutor receives structured feedback from both the SLs and the SP (reviewing the video recording as appropriate), and all participants' perspectives are then explored using individual and group interviews. Standard qualitative techniques are used to analyse the data.

Results: Interim results from the study (currently in progress, anticipated completion June 2004) are very positive. Differing teaching styles can either exacerbate or alleviate patient anxiety. SLs and SP identified both strengths and weaknesses in each tutor's technique, and established a climate of patient-centred awareness. Sessions have promoted both tutors' and learners' understanding of effective teaching, and the need to include the patient's concerns. Tutors expressed anxiety about feedback, but most derived benefit at the end of the session.

Conclusion: Reflective bedside teaching using simulated patients, simulated learners, video-based feedback and objective rating scales can enhance understanding of the teaching process. This can provide tutors with valuable insight into their own teaching style and its potential impact on both learners and patients.

References: 1. Litzelman DK, Stratos GA, Marriott DJ, Skeff KM. Factorial validation of a widely disseminated educational framework for evaluating clinical teachers. Acad Med 1998; 73(6): 688-95.

 

Expertise and the Accuracy of Direct Observation
Keywords: Clinical skills, evaluation, direct observation
Authors: Holmboe, E.; Hawkins RE; Huot SJ
Institution: Yale University, National Board of Medical Examiners

Summary: Background: Experts are often used to set the standards for ratings of clinical skills. The purpose of our study was to assess experts' accuracy in the observation of medical interviewing, physical exam, and counseling.

Methods: Standardized residents and patients were used to develop a series of videotapes depicting a PGY2 medicine resident performing a medical interview, pexam, or counseling at varying levels of proficiency. Each tape was scripted to contain specific errors of either commission or omission. Experts who taught and published in medical interviewing, pexam, and counseling were recruited to rate three tapes. A 9-point miniCEX rating form was used where 1-3 is unsatisfactory, 4-6 satisfactory, and 7-9 superior. Experts were also asked to write in free text any errors or deficiencies observed on the tapes. Accuracy was determined by the number and percent of errors correctly listed on the miniCEX form.

Results: 7 experts participated: 3 for medical interviewing, 2 for pexam, and 2 for counseling. Ratings were markedly divergent for the three experts reviewing the medical interviewing tapes. All three experts noted 25% or fewer of the errors committed. Both counseling experts noted 33% or fewer of the errors. The pexam experts were more accurate with one expert identifying 81% of errors. Despite the lack of noting specific errors, 6 of 7 experts rated the unsatisfactory (level 1) tape as unsatisfactory (1-3).

Conclusions: Although expertise did not appear to enhance the accurate detection of specific errors, all but one of the experts were able to discriminate successfully between levels of proficiency. Detection of specific errors, however, is essential for proper feedback and the professional development of trainees' clinical skills.

 

Students' self-assessment of Adult Basic Life Support
Keywords: self-assessment, adult basic life support, medical students
Authors: Vnuk, A, Owen H.
Institution: Flinders University

Summary: Healthcare undergoes continuous change, so medical students must develop lifelong learning skills as well as current knowledge. An important aspect of this is the ability to self-assess performance. The goal of this study was to investigate how well students could assess their own performance in a simulated cardiac arrest at the end of a Basic Life Support (BLS) course. Ninety-five students received four one-hour practical teaching sessions covering the DR ABC algorithm (1). Students then participated in a short CPR scenario in which sudden cardiac arrest was simulated using a ResusciAnne with SkillGuide (Laerdal, Norway). A video recording was made of each student's CPR using a DVD camcorder. Immediately after completing the task, students rated their performance on a 6-point descriptive scale ranging from "completely useless" to "perfect". The students were then shown the recording and asked to re-rate themselves. A single expert assessor then viewed all the recordings and rated the students on the same scale, according to the guidelines (1). Students' assessments did not agree with the expert assessor either before viewing the video (weighted kappa = 0.03) or after viewing it (weighted kappa = 0.002). Students tended to assign higher scores than the expert. We had anticipated that observing themselves on video would improve the students' self-assessments. Our presentation will discuss possible reasons why it did not, including the lack of benchmarking and the fact that students viewed the recording only once, and how student self-assessment might be improved.

Reference: 1. Part 3: Adult Basic Life Support. Resuscitation 2000; 46: 29-71.
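
The agreement statistic used above can be computed with a standard routine; in the sketch below the ratings are synthetic and linear weights are an assumption, since the abstract does not state the weighting scheme.

```python
# Minimal sketch of a weighted kappa between students' self-ratings and the
# expert's ratings on the 6-point scale. Ratings here are synthetic, and
# linear weights are an assumption (the abstract does not specify them).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
expert = rng.integers(1, 7, size=95)                             # 6-point scale
students = np.clip(expert + rng.integers(0, 3, size=95), 1, 6)   # tendency to over-rate

kappa = cohen_kappa_score(students, expert, weights="linear")
print(f"weighted kappa = {kappa:.2f}")
```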

 

Difference between choosing an answer and constructing an answer
Keywords: assessment, multiple choice questions, open questions
Authors: Fernández Garza, N.
Institution: Facultad de Medicina de la Universidad Autónoma de Nuevo León, Monterrey, Nuevo León, México

Summary: Assessment using selected-response item formats is widely used in medical education. These methods require examinees to select rather than produce responses. However, in clinical practice physicians cannot select a diagnosis or treatment from a list of given options; in real life they must construct an answer from the information they have. We compared, in a 20-week physiology course, the grades obtained by students assessed using multiple choice questions (MCQ) and open questions (OQ). Four pairs of assessments (MCQ and OQ) were applied to 289 students and graded on a 0-100 scale. The average grades for the MCQ assessments were 59, 53, 55 and 55, and for the OQ assessments 38, 37, 43 and 46. All students who passed the OQ assessment also passed the MCQ, but not vice versa. The numbers of students who performed better on the OQ assessment than on the MCQ were 18, 24, 49 and 74. These results show an improvement over time in the OQ grades as well as a rise in the number of students who performed better on the OQ than on the MCQ. This is probably because students are initially unfamiliar with constructing an answer but learn how to do it. We consider OQ a better method for assessing students' knowledge, because students must know the answer, whereas with MCQ a student who does not know the answer can still choose one.

 

Dual Entry pathways in an Undergraduate Medical Course
Keywords: student selection, assessment, evaluation
Authors: Elliott, Susan, L.; Dodds, Agnes, E.
Institution: University of Melbourne

Summary: The University of Melbourne medical course has dual entry pathways designed to increase diversity in the student body. School-leavers (n=120) undertake a six-year course, and graduate entrants (n=60) a four-and-a-half-year course in which they are exempt from the first half-year (semester) and from a research year between years three and four. Both groups are selected on the basis of prior academic performance and aptitude test results. We report on the academic performance and course perception evaluations of both groups using data from one cohort over five years.

Table 1 gives the mean percentage scores for the two groups in their most recent examinations. As the table shows, there is little difference in scores, either for clinical performance or for written assessment tasks.

Table 1: Mean scores for school-leaver and graduate-entry students in one 5th-year rotation (Specialty Health Rotation: Psychiatry, Aged & Palliative Care, Rural Medicine).

                   School-leaver entry   Graduate entry
                   Mean (s.d.)           Mean (s.d.)
Clinical OSCE      65.4 (10.9)           67.4 (9.3)

Data from all previous years will demonstrate that, as in Table 1, no systematic differences have been found across 4 years of biological science, clinical performance and written clinical examinations. In addition, mean ratings of course subject evaluations by students have shown few differences. Possible reasons for the similarity between the two groups will be explored including curriculum/assessment alignment and the role of competing priorities at different life stages. These findings will be of interest to schools wishing to broaden their selection criteria.

 

On-line administration and marking of Modified Essay Question Paper
Keywords: Education, medical, assessment, on-line technology
Authors: Davy, P, Zhou, J., Miller, N. and Clarke, R.
Institution: University of Sydney

Summary: Modified Essay Question (MEQ) papers provide a valid and reliable means of assessing problem-based learning (Felleti, 1980). However, the invigilation (proctoring) requirements are onerous, because students may not turn forward to discover what happens in the case, nor may they turn back to amend previous answers on receipt of further information. The marking requirements are also substantial, particularly with large cohorts of students. On-line administration of MEQs can overcome the invigilation problem if answers, once submitted, are unretrievable, and if submission of an answer is required before access to the next question. On-line submission also overcomes legibility problems. On-line marking of MEQs can facilitate and expedite the marking process, for example by using tick-boxes for recording scores or grades. It enables examiner feedback comments to be appended to students' answers. Answers can be sorted by score or grade, and then reviewed for consistency of marking, and quality control can be undertaken by random re-presentation of scripts for duplicate-marking. Two examiners can share a marking load, and inter-rater reliability can be checked by sample duplicate-marking. This presentation will discuss the consequences and implications for both students and teachers of on-line administration and marking of MEQs.

Reference: Felleti, G. (1980) 'Reliability and validity studies on modified essay questions' Journal of Medical Education 55: 933-941.

 

An exploration of the relationship between peer assessment and self-regulated learning
Keywords: Education, medical, assessment, self-regulated learning
Authors: Davy, P.
Institution: University of Sydney

Summary: The University of Sydney's medical program (USydMP) incorporates a problem based learning (PBL) approach in both campus and clinical contexts over four years. Students in Year 1 of the USydMP sit two written formative assessments that primarily assess knowledge of the basic and clinical sciences learned in the first year of the program. Both written formative assessments include Modified Essay Question (MEQ) papers, which are peer-marked by students. Peer assessment occurs in PBL tutorial groups, allowing students to access their peers as a resource. To assist in the peer-marking task, each student is provided with a printed set of model answers supplied by item writers. In addition, each PBL group has access to online support from discipline (clinical and basic science) experts, usually the MEQ item writer. Peer assessment in this context provides opportunities for students to reflect on their learning strategies with regard to basic and clinical sciences content. Students may then regulate their learning strategies in order to achieve multiple goals including improving their own knowledge of the basic and clinical sciences and providing specific feedback to peers. Peer assessment has been characterized as supporting self directed learning and assisting in the establishment of conducive learning environments required for the encouragement of self-regulated learning. This presentation explores the relationship between peer assessment and self-regulated learning, and invites discussion.

 

On-line assessment using short answer questions
Keywords: On-line, assessment, short answer questions
Authors: Devitt, P. Palmer, E. De Young, N.
Institution: University of Adelaide

Summary: To test understanding of concepts, examiners often use short answer questions (SAQs). Disadvantages of this form of testing include observer variability in marking and the time taken to mark. Multiple choice questions (MCQs) are more robust in terms of marking reliability and can be made available and marked on-line. In general, however, MCQs test little more than acquired knowledge, and an ideal testing system would combine the reliability of an MCQ with the flexibility of an SAQ, giving objective measurement of both knowledge and understanding. We set out to formally evaluate a system of objective, automated marking of an on-line SAQ examination: to create an on-line method of assessing short essay questions for formative assessment and to compare its effectiveness with traditional marking. Six short essay questions were administered to 16 students selected randomly from the 4th-year medical students. The papers were marked by hand and then marked retrospectively using a computerised system. Finally, the papers were re-marked by the assessor. The correlation between hand-marked and automatically marked questions ranged from 0.42 to 0.93 (mean 0.69). The assessor's hand marking compared over two time periods correlated within a range of 0.27 to 0.89 (mean 0.59). The mean difference between hand-marked and computer-marked questions was 10%. Short essay questions can be assessed using on-line tools. Such systems can reduce the marking workload of faculty staff while maintaining the objectivity and accuracy of marking.

 

From the classroom to the clinical environment. Are third year students prepared for this transition?
Keywords: curriculum evaluation, undergraduate medical education, clinical skills
Authors: Kiegaldie, D. and Lindley, J.
Institution: Monash University

Summary: The medical course at Monash University is currently undergoing major curriculum development and change, and in 2002 a new five-year undergraduate medical degree commenced. The first cohort of students has completed two years of the course and has now entered the first clinical year, where students are introduced to structured teaching and learning activities in the clinical environment. This paper will discuss results from an evaluation of students' perceptions of their preparedness for the transition into the clinical setting. A self-rated survey instrument was developed with items correlated to the clinical skills curriculum. The survey was administered at the end of second year and gathered students' views on the success of the early years in preparing them for immersion in the clinical setting. Results revealed that students are able to identify strengths and weaknesses in terms of their clinical skills preparation. Mapping the results of data analysis to the curriculum content for the early years of the course can be used for quality improvement processes and to guide curriculum development. A useful profile of the clinical skills developed by a typical student can be established to assist clinical supervisors when teaching in the clinical years and to facilitate the transition of students from the classroom to the clinical environment. This is an innovative approach to reviewing and improving the curriculum vertically across the entire course.

 

The Borderline Candidate – a Distinct Species?
Keywords: assessment, borderline, standard setting
Authors: Sturmberg, J.; Hinchy, J.; Farmer, E.
Institution: RACGP

Summary: Criterion-based standard setting is used for high-stakes examinations such as the certification examination run by the Royal Australian College of General Practitioners (RACGP), which is solely responsible for certifying competence for unsupervised Australian general practice. The examination has an OSCE component and a written component consisting of two segments, the Applied Knowledge Test (AKT) and Key Feature Problems (KFP). Standard setting judges for the written components determine the expected score of a minimally competent (Borderline) candidate.

Standard setters over the last ten Examination administrations have differed significantly in their judgements despite sustained efforts to define the Borderline candidate. Judges' comments and analysis of the underlying concept of competency suggest that the Borderline candidate does not lie at a single intersection of competent and incompetent, but instead covers a range of performance over which competency decisions necessarily remain indeterminate. Two distinct kinds of borderline candidate emerge: first, the 'minimal borderline' candidate, whose performance lies at the boundary with the clearly incompetent candidate, and second, the 'sufficient borderline' candidate, whose performance lies at the boundary with the clearly competent candidate. Clarifying this distinction offers more stable standard setting. Implications of this view are explored, including a more rational approach to standard setting and the assessment of the magnitude of type I and type II error rates under varying pass/fail decisions. In addition, empirical evidence is presented supporting the existence of the Borderline candidate as a distinct category of performance.

 

Roles of age and examination experience in Key Feature Problem performance in the certification examination for Australian general practice
Keywords: Performance predictors, General practice, Australia, Logistic regression
Authors: Hinchy, J.
Institution: Royal Australian College of General Practitioners

Summary: The Royal Australian College of General Practitioners (RACGP) has sole responsibility for certifying competence for unsupervised Australian general practice. The certifying examination includes Key Feature Problems (KFP) designed to test clinical decision-making. The three-hour KFP component contains 25 problems, each consisting of a clinical scenario followed by two to four related questions. Candidates either write in their answers or select them from a menu of options. Previous work has identified several candidate-related factors that significantly affect performance on the KFP component, including candidate age: older candidates tend to perform less well than younger candidates. Candidate age is also highly correlated with time since completion of studies. Age-related performance effects could therefore reflect inherent candidate characteristics that affect clinical decision-making performance, at least on the KFP. Alternatively, recency of study experience and the associated examination skills could account for the poorer performance of older candidates. This paper contrasts these two explanations by examining the last ten administrations (1999-2003) of the KFP component, covering 3576 candidates, and presents evidence about the relative roles of the two factors. The influence of subsequent study experience is also investigated. Finally, different analytic strategies for examining performance outcomes are discussed.

 

Facilitator evaluation in PBL: What can we learn?
Keywords: Evaluation, value, facilitator feedback, PBL
Authors: van Wyk, J.
Institution: Nelson R Mandela School Of Medicine, Faculty of Health Science

Summary: Introduction: In January 2001, the Nelson R. Mandela School of Medicine implemented a problem-based learning (PBL) curriculum after nearly 50 years of traditional teaching. Staff members are trained for their role as facilitators during a 4-day workshop. Students are briefed about what they should expect from their facilitators. At the end of each 6-week theme, students are asked to rate their facilitators using a Likert scale and to provide them with tips to improve their facilitation skills. Feedback from students is analysed, individual reports are constructed, and a covering letter highlighting the main areas for improvement is sent to facilitators. Facilitators are encouraged to reflect on their practice and to include some of these reports in their teaching portfolios. A recent survey was conducted to establish the value of the feedback received by facilitators. More surveys were returned by non-medically trained facilitators than by medically trained ones.

Results: 66% of the group indicated that the feedback received was useful, some saying that it provided a non-threatening way of identifying their shortcomings. 17% thought the feedback was not useful because students were in too much of a hurry when completing the evaluation and because the feedback often arrived too late to affect their practice. These results indicate that there is some value in the continued use of this form of feedback to facilitators, but that the logistical aspects need serious attention.

 

Evaluation of the basic cycle years of the Medical Course at the Faculty of Medicine of the University of Porto (FMUP)
Keywords: evaluation, basic disciplines, component analysis
Authors: Tavares, M.A.F., Bastos, A., Guimarães, L., Loureiro, E., Silva, M.C.
Institution: Faculty of Medicine, University of Porto; School of Higher Education, Polytechnic Institute, Viana do Castelo; and Institute for Biomedical Sciences, University of Porto. Porto, Portugal

Summary: Introduction: Within the scope of quality programs, an evaluation process started at FMUP (2002/2003 academic year). A study of consistency of a self-administered questionnaire for the ranking of basic cycle disciplines was performed.

Methods: All students of the basic cycle filled in a self-administered questionnaire. The 36 items aimed to evaluate planning, resources, teacher/student interaction, and students' self-evaluation. A principal component analysis was used to extract a reduced set of coherent subsets of items, based on the overall answer means across the disciplines for the corresponding year, with differences tested by repeated measures ANOVA.

Results: Of 648 students, 569 (87.8%) answered; 24 students and two items were excluded for missing information, and means were calculated for the remaining 34 items (545 cases). A preliminary principal components extraction was used to estimate the number of factors and the presence of outliers. Seven items and 3 outlying observations were further excluded after analysis of the squared multiple correlations between items and factors. Orthogonal rotation described 3 meaningful factors: "teaching evaluation", "course planning/resources", and "students' self-evaluation". Internal consistency was high, with Cronbach's alpha of 0.93, 0.87, and 0.81, respectively. Stability was tested for each discipline, with values of alpha ranging from 0.67 to 0.92; Anatomy (1st year) did not follow this pattern. The 3 factors were sensitive to the evaluation of the disciplines (ANOVA model); Biochemistry, Clinical Anatomy, Clinical Psychology, and Biopathology ranked higher than the remaining disciplines of each year.

Conclusions: The three-component solution can be used for ranking purposes and academic planning. (Supported by Fundação Calouste Gulbenkian and FMUP)
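For readers wishing to reproduce this type of analysis, a minimal sketch follows, using invented data rather than the FMUP dataset: a principal component extraction (here with scikit-learn, assumed available) followed by Cronbach's alpha for an item subset.

```python
# Minimal sketch of a PCA plus internal-consistency analysis (hypothetical data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(500, 34)).astype(float)  # 500 students x 34 items

# Extract three principal components (the number retained in the study above).
pca = PCA(n_components=3)
pca.fit(responses)
print("Explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Alpha for an illustrative subset of items assumed to load on the first factor.
print("Alpha (items 0-11):", round(cronbach_alpha(responses[:, :12]), 2))
```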

 

Development of a computerised free-text Progress Test for undergraduate medical students
Keywords: Progress Test, curriculum outcomes, core knowledge, computerised assessment, free text assessment
Authors: McEwen, J. Murphy, B. Pippard, M.J.
Institution: University of Dundee, Ninewells Hospital and Medical School, Dundee, Scotland, DD1 9SY, UK

Summary: We piloted a 270-question free-text Progress Test for undergraduate medical students in 2001, testing core curriculum knowledge (Hunter et al, 2002). This test has evolved and become a summative component of the outcome-based Y5 final Portfolio exam. A major difficulty during 2001 and 2002 was the considerable effort required in handling manuscript exam papers. We first needed a 'moderation' process involving medically-qualified faculty to examine final year scripts and refine the template of anticipated correct answers; then lay faculty members could mark scripts. In 2003, after pilot work demonstrated its potential, we used a computerised free-text assessment system to administer, moderate and mark the exam (Mitchell et al, 2003). This proved to be robust and accurate, allowing moderation and marking over a 2-3 day period. The Progress Test provides results for individual students with their class ranking, and also specific cohort information for all five years, as seen in the following example: Two of the parameters in the Glasgow Coma Scale are motor response and eye opening. What is the third?

 

Year:       Y1   Y2   Y3   Y4   Y5
% correct:   7   10   49   74   82

The results can also be analysed to show progression in subject areas and individual curriculum outcomes.

References: 1. Hunter I, Murphy B, McEwen J, Friedman M. Experience with a free-text pilot Progress Test for undergraduate medical students. Ottawa Conference on Medical Education, July 2002.

2. Mitchell T, Aldridge N, Williamson W, Broomhead P. Computer Based Testing of Medical Knowledge. Seventh International Computer Assisted Assessment Conference, Loughborough, UK, July 2003.

 

Benchmarking Facilitated: Comparing Student Performances Across Borders
Keywords: facilitating benchmarking internationally
Authors: Hazlett, C.; Cook, D.; Dauphinee, D
Institution: Chinese University of Hong Kong

Summary: Quality assurance demands require medical schools to benchmark their educational programmes. In countries with nation-wide professional accreditation and licensing bodies, a school's accreditation standing or the performance of its graduates in annual licensing examinations is often referenced. In countries without parallel professional services, schools frequently seek review by an internationally recognized body (e.g., the General Medical Council) and/or encourage their graduates to take a well-recognized licensing examination (e.g., one set by the National Board of Medical Examiners). However, these types of benchmarks do not enable a school to benchmark student progress throughout its medical curriculum. Given this limitation, a Hong Kong medical school collaborated with a school in Canada and the Medical Council of Canada to use Canadian-developed items for up to three-eighths of all questions in end-of-year discipline-based examinations. The Canadian items had the desired discrimination and difficulty properties. Comparing students' performances on the Canadian- and Hong Kong-developed items provided useful insight into students' progress and helped inform accreditation site visitors about the adequacy of the school's curriculum design and delivery. Given this value-added approach to benchmarking, sharing assessments with other medical schools was undertaken. Today, the expanded partnership includes twelve schools from eight countries. Each participating school can access items developed by its partners and use the related item psychometrics to benchmark within each year of its respective medical programme. This model for continuous quality assurance involves benefits, limitations, costs and difficulties. The presentation highlights these factors in evaluating the benchmarking protocol.

 

The psychometrics of Personal Development Planning for General Practitioners
Keywords: Psychometrics, Personal development plans, revalidation
Authors: Roberts C, Cromarty I, Russell J
Institution: University of Sheffield

Summary: UK GPs have been using personal development plans (PDPs) for many years as a means of evidencing their continued professional development. Recently imposed arrangements require GPs to keep a PDP as part of a cycle of revalidation (made up of five annual appraisals with an accredited appraiser). The purposes of both types of PDP are summative, but the stakes are very different: the aim of the former is to determine an allowance of money, and of the latter to establish a practitioner's fitness to practise. To reassure UK stakeholders that revalidated GPs are fit to practise, the reliability and validity of PDPs in this context need to be established. There are currently few data reporting the measurement characteristics of PDPs. This paper describes the assessment characteristics of PDPs that were undertaken within one UK Deanery prior to the introduction of the appraisal process. Thirty-four volunteer GPs submitted their PDPs and 31 assessors used a prepared blueprint to mark them. A variance components procedure estimated the contribution of each random effect (e.g. judges) to the variance of the overall PDP mark. Generalisability coefficients were derived from the analysis of variance components, and a decision study was conducted to model coefficients for differing numbers of judges and sections of the portfolio. The number of judges required to give a reliable assessment of the PDP, fit for summative purposes, was established. Further research is required to see whether this methodology can be applied to PDPs within the UK GP appraisal process.
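A rough sketch of the kind of decision-study projection described above follows; the variance components are invented, not those estimated in the study. Given components for GPs, judges and residual error, a relative generalisability coefficient can be projected for different numbers of judges.

```python
# Illustrative decision-study projection from assumed variance components.
# (Values are hypothetical, not those estimated in the study.)
var_gp = 0.60        # true variance between GPs' PDP quality
var_residual = 0.90  # GP x judge interaction plus error

def g_coefficient(n_judges: int) -> float:
    """Relative generalisability coefficient when each PDP is marked by n_judges."""
    error = var_residual / n_judges
    return var_gp / (var_gp + error)

for n in (1, 2, 4, 8):
    print(f"{n} judges: G = {g_coefficient(n):.2f}")
```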

 

Formative assessment of family doctors on vocational training programs
Keywords: educational measurement, clinical competence, formative assessment, vocational training
Authors: Josep Maria Fornells, Mati Ezquerra, Magda Bundó, Dolors Forés, Amando Martín Zurro,
Josep Maria Martinez-Carretero
Institution: Teaching Units of Family Medicine Residency Programme of Catalonia / Institute of Health Studies

Summary: Background: Summative and formative assessment should be part of any educational activity. Formative assessment makes it possible to show residents' progress in specific competencies in relation to defined educational objectives, in order to identify the elements of the residency programme that need improvement.

Purpose: To initiate a formative assessment strategy for the family medicine residency programme in Catalonia.

Methodology: The formative assessment methodology includes two different tools: competence progress analysis sessions (CPAS) and resident-tutor feedback sessions. Each CPAS lasts 1 hour, and the participants are residents and tutors in primary health care centres. The educational objectives, competences assessed and tools used (SPs, computer-based cases, mannequins, self-audit, and so on) differ according to the residency year. There are two CPAS per year of residency. Three tutor-resident feedback sessions are held each residency year in order to analyse the resident's competence progress and establish an educational plan and improvement actions. The person responsible for the residency programme is informed.

Results: To date, three resident cohorts (600 residents) have been involved in the programme. It started with the 2001 cohort, for which 4 CPAS have already been conducted; the 2002 cohort has had 3 CPAS and the 2003 cohort 1 CPAS. The strengths of this project lie in its involvement in the residents' educational process; its weaknesses are the lack of time and of training in these formative methodologies. Participants' collaboration is crucial to overcoming such problems.

Conclusions: Formative assessment of family medicine residents has been fully and satisfactorily implemented in Catalonia.

 

Opinions of faculties about the efficiency of student ratings on teacher performance in Iran University of Medical Sciences during 1999-2000
Keywords: Evaluation, student, faculty, teaching
Authors: Sarchami, R.; Salmanzadeh, H.
Institution: Qazvin University of Medical Sciences

Summary:

Background: Continuous evaluation of faculty members' performance, if done correctly, can help to identify their weak points and improve their teaching.

Objective: This descriptive-analytic study was conducted to assess the opinions of faculty members about the efficiency of student ratings of teacher performance at Iran University of Medical Sciences during 1999-2000. Method and Materials: Two questionnaires were used to assess the opinions of educational field managers and faculty members about the efficiency of student ratings of faculty performance.

Findings: The results indicated that the majority of faculty members (61.9%) reported a low rate of change in their performance, and the majority of managers (65.8%) reported a low rate of change in the performance of the faculty members in their group.

Conclusion: Given the importance of faculty evaluation, it seems that the evaluation process should be performed with a more professional approach and with greater contribution from faculty members.

 

Improving scoring outcomes in a national high-stakes pharmacy OSCE
Keywords: Scoring High-Stakes OSCE
Authors: O'Byrne, C.; Pugsley, J.; Quero Muñoz, L.J.
Institution: Pharmacy Examining Board of Canada; Inside Testing (Psychometric Consultant)

Summary: The Pharmacy Examining Board of Canada (PEBC) and the College of Pharmacists of British Columbia jointly developed, field tested, and researched the utility of an "objective structured clinical examination" (OSCE) for assessing entry-to-practice and continuing professional competency of pharmacists. Field test research showed that two different assessors in the same location independently provide checklist scores and global scales that are very similar. Since that time, PEBC has implemented a 15-station OSCE in 11 different sites, involving multiple tracks in most of these sites. PEBC conducted post-implementation quality assurance research using videotaped candidate performances to further explore potential sources of error due to inconsistencies within and between sites in assessors' analytical (checklist) and holistic scoring. In this study, previously trained pharmacist assessors scored 5 videotaped candidate performances for candidates that they had assessed during the examination, as well as three sets of 5 candidate performances from other tracks and sites. Findings from the application of generalizability theory in the analysis of sources of error in these multiple measures will be presented, along with implications and recommendations for examination development, assessor training and future research.

 

Improving case presentation and outcomes in a national high-stakes pharmacy OSCE
Keywords: Standardized Patients, High-stakes, OSCE
Authors: O'Byrne, C.; Pugsley, J.; Smith, C.; Quero Muñoz, L.J.
Institution: Pharmacy Examining Board of Canada; University of Toronto; Inside Testing (Psychometric Consultant)

Summary: The Pharmacy Examining Board of Canada (PEBC) and the College of Pharmacists of British Columbia jointly developed, field tested, and researched the utility of an "objective structured clinical examination" (OSCE) for assessing entry-to-practice and continuing professional competency of pharmacists. Research conducted during the field test supported others' findings that stations or tasks contribute most to score variance, followed by persons and raters. The nature and impact of differential SP training and performance has not been investigated. PEBC recently conducted post-implementation quality assurance research using videotaped candidate performances to further explore error due to inconsistent case presentation within and between sites. In this study multiple Standardized Patient Trainers reviewed three candidates' videotaped interactions from one track in their own site and from other tracks and sites, to evaluate similarities and differences from candidate to candidate, within and between tracks and sites, in the consistency of: 1. SP portrayal of the script and affect and 2. the presentation of the station materials. These SP trainers were able to replay the scenarios repeatedly, in any order, comparing portrayals and presentations within and between sites, and completed an evaluation survey. Findings will be presented, along with implications and recommendations for examination development, SP training and future research.

 

Competence standard setting: improved new method for high stakes OSCEs
Keywords: Standard-setting, High-stakes, OSCE
Authors: O'Byrne, C.; Pugsley, J.; Quero Muñoz, L.J.
Institution: Pharmacy Examining Board of Canada; Psychometric Consultant

Summary: The Pharmacy Examining Board of Canada (PEBC) has developed a new standard setting method to set the passing score for its high-stakes OSCEs. Because certification-for-licensure decisions are based on the exam, the validity of decisions based on the passing scores is critical. On six occasions, ten to twelve pharmacists representing all domains of pharmacy practice across Canada were convened for two-day panels during which passing standards were set for each of six 15-station OSCE forms. All panelists had been examiners and, during standard setting, viewed sample videotaped candidate performances and examination data. Judgments were made about the content of each station and about hypothetical borderline qualified candidates' performance in each station. Panelists scored the hypothetical candidates' performance using the same rubrics and scales as in the exam. The mean of these scores was the examination passing score. Data analyses focused on sources of internal validity, particularly the consistency of the standard setting results. Generalizability and dependability studies determined the proportion of variance contributed by panelists, station and form difficulty, and station x panelist interactions.

Results indicated that (1) the passing score variance components between panelists within form were small; (2) viewing sample videotaped performances and scoring the hypothetical candidate on the same scales as in the examination enhances the validity of the results. In conclusion, the standard setting procedure yields consistent, dependable, valid results.

 

Evaluation of a Global Concept of Professional Competence: The OIIQ Experience
Keywords: Professional competence, Clinical judgement, Competence assessment
Authors: Louise-Marie Lessard, R.N., Ph.D. and Judith Leprohon, R.N., Ph.D. Carlos Brailovsky, MD, MA (Ed)
and François Miller, M (Ed).
Institution: Scientific Department, Ordre des infirmières et infirmiers du Québec, Montreal, Quebec, Canada Centre d'évaluation des sciences de la santé de l'Université Laval, Canada

Summary: The Ordre des infirmières et infirmiers du Québec (OIIQ) views professional competence as a global concept and has adopted a new professional examination which reflects this vision. This presentation will describe a model of professional competence developed by the OIIQ, the Mosaic of Nurses' Clinical Competencies, and discuss how the OIIQ's professional exam, with its written and practical sections, can best evaluate this construct of professional competence. The Mosaic of Nurses' Clinical Competencies represents clinical competence as a cube whose three axes correspond to the interacting components (functional, professional and contextual) of the examination's framework. The macro-competencies are generic, and their content is defined in the interaction of the three components, through integration of knowledge. Developed in collaboration with the Centre d'évaluation des sciences de la santé de l'Université Laval (CESSUL), the OIIQ's new professional examination is made up of two complementary instruments, both based on the key-feature approach. It includes a written exam of 100 short answer open-ended questions, and an objective structured clinical examination (OSCE) composed of 16 clinical cases. These two complementary instruments allow for the evaluation of clinical judgement as it emerges from the interaction of the three components of the mosaic while responding to a sample of typical clinical situations that nurses may encounter upon entry to practice. This type of examination also takes into account the multidimensional and multivariate characteristics related to the construct of professional competence represented by the mosaic.

 

Giving and getting information… A comparison of scores awarded for different consulting tasks at an interactive high stakes general practice (GP) examination
Keywords: information communication assessment clinical
Authors: Wiskin CM, Burn S and Barry K
Institution: University of Birmingham

Summary: Introduction: This study considers the relationship between the clinical content of a scenario and the scores awarded to qualifying students for communication and consultation skills. The hypotheses were that the study cohort would score better on some consulting tasks than on others, and that a relationship might emerge between communication skills scores and the clinical task specified, in particular relating to giving information.

Method: VOICEs is a long-station interactive final examination in GP. Two of the six stations are simulated consultations. Candidates are marked by observing GP examiners, who collaborate with professional role players for communication skills scoring. Checklists had been found ineffective at reflecting students' professionalism and attitude in a meaningful way, and so were replaced in 1997 with descriptive bandings. Scoring data for 372 consultations were collected and analysed, with particular emphasis on score comparisons for different types of consulting task.

Results: Results from the pilot study showed poorer student performance in giving information and explaining risk than in demonstrating surface skills and negotiating with patients. Extended results will be considered in the context of the clinical subject being examined, and the nature of the information giving (or getting) task that was specified.

Discussion: Can communication and consulting skills be assessed independently of each other? To what degree is student consulting performance influenced by the clinical task?

 

Clinical training assessment in competence evaluation
Keywords: clinical training, competence evaluation
Authors: Ferré R., Jammoul A., Castro A., Vidal F*., Masana Ll.
Institution: Servei de Medicina Interna. Hospital Universitari Sant Joan. *Hospital Universitari Joan XXIII. Facultat de Medicina i Ciències de la Salut. Universitat Rovira i Virgili. Reus. Tarragona. Spain.

Summary: Clinical training brings together theoretical knowledge and deductive skills with the experience of an immediate result. Our new curriculum allows for 60% of credits to be practical. We aim to evaluate clinical practice on the basis of the case history, students' self-motivation, relationship with and respect towards the patient, and the ability to work as part of a team. The process runs parallel to the acquisition of theoretical knowledge, and it is necessary to pass the theory exam to obtain the final qualification (FQ). The clinical practice qualification (CPQ) is assessed by a physician and is composed of three elements: a case history (35%), acquired abilities (35%), and the student's engagement with clinical work (30%). The final qualification is expressed as: theory exam (70%) plus clinical practice (30%), as illustrated in the sketch below. Results of a 2-year experience covering 226 qualifications are discussed. The mean CPQ was 8 (SD 0.7, range 5.7-9.4) and the mean FQ was 6.5 (SD 1.4, range 3.1-10). The new method of evaluation results in a final score 0.4 points higher on the FQ. There is a significant inverse correlation (r: 0.89, p< 0.0001) between the CPQ score and the FQ, implying that the CPQ benefits most those students with a lower final score. The CPQ also shows lower variability than the theory qualification. The worst theory qualifications can improve the FQ by about 25%, and the best can diminish it by about 5%. There is no significant correlation between theory marks and the CPQ. We think that clinical practice assessment can be used as a complement in evaluating students' clinical competence.
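The weighting described above can be expressed as a simple calculation. The sketch below is illustrative only (the example marks and the assumption of a 0-10 scale are not from the abstract); it combines the three CPQ elements and the theory mark into the final qualification.

```python
# Illustrative computation of the clinical practice and final qualifications
# using the weights stated in the abstract (marks assumed on a 0-10 scale).
def clinical_practice_qualification(case_history, abilities, engagement):
    # CPQ = 35% case history + 35% acquired abilities + 30% engagement with clinical work
    return 0.35 * case_history + 0.35 * abilities + 0.30 * engagement

def final_qualification(theory_exam, cpq):
    # FQ = 70% theory exam + 30% clinical practice qualification
    return 0.70 * theory_exam + 0.30 * cpq

cpq = clinical_practice_qualification(case_history=8.0, abilities=7.5, engagement=9.0)
print(round(final_qualification(theory_exam=6.0, cpq=cpq), 2))
```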

 

Computer based evaluation in obstetric and gynaecology
Keywords: computer evaluation pregraduate students
Authors: Chung C, Navarrete L, Salamanca A, Segura T, Diez JL y 1Peinado JM.
Institution: Department of Obstetric and Gynaecology. 1Department of Biochemistry. Medical Education Unit. Faculty of Medicine. University of Granada, Spain.

Summary: In the present study, the clinical training of pregraduate students in obstetrics and gynaecology was evaluated using a computer-based method. Computer-based clinical simulation allows some clinical skills to be evaluated in a large number of students simultaneously, with validity, reliability and feasibility. The evaluation consisted of the computer presentation of 8 clinical cases, including a video-recorded interview with a simulated patient and different sets of images. The test was presented in PowerPoint format. A total of 220 students were evaluated, distributed into three groups. Each question was answered consecutively after each computer presentation within a previously fixed time. The total length of the test was 45 minutes. The maximum mark was 50 points. The average student mark was 32.45, with a range between 17 and 45 points. The evaluation was passed by 85.45% of the students. The presentation of results will analyse the relation with gender, clinical attendance, clinical skills shown on the ward and personal satisfaction with the clinical training period. The results obtained in the present study show that computer-based evaluation is a valid method for assessing cognitive skills, problem solving and clinical reasoning. However, psychomotor and communication skills cannot be evaluated. This method improves on the classical multiple-choice test, making it possible to evaluate certain clinical skills in a large number of students more cheaply and quickly than an OSCE.

1. Metz J, Vleuten Cvd y Jacobs A. Evaluación de las habilidades. En Recursos para profesores en la enseñanza de las habilidades médicas. J Metz, M Patricio, JM Peinado y P Szekeres. Tempus 1999, 107-138.

2. Robbins G y Chalmers J. Instrucción asistida por ordenador. En: La docencia en Medicina. K Cox y C Ewan. Ed. Doyma 1990, 273-280.

 

Implementation of a new curriculum and assessment system and its consequential validity
Keywords: curriculum, assessment, consequential validity
Authors: Verheggen, M.; Romme, L.; Schuwirth L.
Institution: University Maastricht

Summary: When changes to a curriculum and its assessment system are implemented, these changes can be expected to influence student learning behaviour (1,2). This is referred to as consequential validity. The specific behaviour induced is not always predictable; sometimes students exhibit completely unexpected strategic behaviour. Generally, the best way to induce a desired learning behaviour is to match the content of the curriculum and the content of the test as closely as possible. At the Faculty of Medicine of the University of Maastricht, a major reform of the curriculum and assessment system has taken place. In particular, the module-based assessment was changed from a true/false end-of-module test to a combination of assignments and a short case-based end-of-module test. The purpose of the present study was to gain insight into the consequential validity of this assessment approach. Sixty first-year and sixty second-year students were invited to complete a questionnaire on the match between curriculum and assessment, and on their study behaviour. The main concern raised by the students was that there was often a mismatch between the content of the module and the content of the test. Despite the case-based testing approach, students still perceived the test as too oriented towards factual knowledge. The assignments were rated more positively. Students are well informed about the rules and regulations concerning assessment and resits. They indicate a clear difference between the most strategic and the ideal study approach. Motives for adopting an ideal study approach instead of the most strategic one are based on idealistic considerations.

References:

1. Frederiksen N. The real test bias: Influences of testing on teaching and learning. American Psychologist 1984;39(3):193-202.

2. Newble DI, Jaeger K. The effect of assessments and examinations on the learning of medical students. Medical Education 1983;17:165-171.

 

Reviewing of clinical exposure and feedback provided to medical students during in-hospital rotations
Keywords: clinical clerkship, clinical exposure, undergraduate education, feedback
Authors: Weinreb, B.; Matar, M.; Kysos, D.
Institution: Faculty of Health Sciences, Ben Gurion Univ., Israel

Summary: Among the significant changes that have taken place in medical care during the last decade, one of the most important is the transfer of much diagnostic and management activity to primary care settings. With clinical teaching still based in hospitals, medical students may nowadays complete their clinical clerkships without being exposed to a wide range of medical problems. Students are rarely observed and provided with feedback on one of the most important aspects of clinical teaching: the complete process of a patient admission. The aims of the study were:

1. Screening the medical conditions students are exposed to, during their clinical clerkships.

2. Screening the amount of observation and feedback provided to medical students.

3. Comparing the clinical exposure that actually takes place during the clerkships with the syllabus.

4. Suggesting methods of remedial education, if needed, in order to ensure exposure to all aspects of the syllabus.

Methods: A questionnaire was distributed to medical students at the Medical School of the Faculty of Health Sciences at Ben Gurion University during their clinical clerkships. Heads of departments, tutors and students were presented with the aims of the study prior to the beginning of the clerkship and their cooperation was requested. Students filled in a questionnaire for each admission they performed during the clerkship. Along with the results, methods of remedial teaching will be suggested as needed.

 

Using standardized patients for assessing the impact of an educational intervention on in-office practice
Keywords: in-office assessment, performance assessment
Authors: Weinreb, B.; Litt, R.; Erlichmann, M.; Kysos, D.
Institution: Faculty of Health Sciences, Ben Gurion Univ., Israel

Summary: The Ben Gurion University School for Continuing Medical Education in Beer Sheva, Israel has, since 1996, annually provided a total of 28 CME courses for primary care physicians. The overall goal of this project was to use objective evaluation to assess the effect of an educational intervention on the behaviour of physicians participating in a continuing medical education program. The subject chosen was childhood asthma. The use of standardized patients to assess physicians' performance has been documented as a validated tool. Parent/child pairs of standardized patients were used before and after two interactive teaching sessions to evaluate their effect on physicians' performance. The visits were carried out by two standardized patient/parent pairs, each comprising a 12-year-old girl with moderate asthma and her "mother" or "father". Following the visit, the "parent" completed a 32-item questionnaire about asthma and five questions about communication skills. The educational intervention was made up of two sessions, each of 90 minutes' duration, with a six-week interval between the sessions. The post-intervention visit was carried out with the same standardized patients, and identical checklists were used. The results showed areas of deficiency identified prior to the educational intervention and the change following the focused teaching. The study offers data on the process of using standardized patients for assessing in-office practice and on using the results to align teaching objectives with pre-identified areas of deficiency.

 

Assessing Clinical Competence in Emergency Medicine
Keywords: emergency medicine, trauma, airborne, recertification
Authors: Weinreb, B.; Marmor, M.; Sagie, Y.; Schwartz, D.; Mayo, A.; Halbertal, M.; Haldenberg, E.; Or, J.
Institution: Faculty of Health Sciences, Ben Gurion Univ., Israel

Summary: Clinical competence assessment and re-certification are a constant need in all medical fields, but they are especially critical in emergency medicine, where care providers are required to show a high level of performance, decision making, manual skills and teamwork. The Rescue and Evacuation Airborne unit of the Israeli air force is a military unit providing emergency medical services in both military and civilian settings. With the OSCE selected as the clinical competence assessment tool, a comprehensive and innovative method for conducting a CME process in emergency medicine was created. To our knowledge, this is the first project of this kind to be described. Twenty content experts, including physicians, paramedics and medics, were presented with a list of topics to be classified as "must/important/nice to have". Disagreement among the experts' classifications was almost nil, and all "must" and "important" topics were collapsed into 18 OSCE stations. A simulation centre was built and an observers' workshop was conducted in order to introduce OSCE-naïve experts to the method. Three pilot tests were conducted and the lessons from these were implemented. The final test lasted 6 days and assessed 118 examinees. Data will be presented, including the "cook book" for establishing such an exam and the logistics and psychometrics of the exam. The results were used to focus remedial teaching on the deficiencies identified for each group and each examinee.

 

The assessment of junior house doctors' clinical competencies: what are the opportunities for ward-based assessment?
Keywords: Assessment; Competence; Junior doctors
Authors: Higgins, R.
Institution: LNR Postgraduate Deanery

Summary: In 2005, changes to UK doctors' training will see a two-year foundation programme replacing the current Pre-Registration House Officer (PRHO) year and the first year at Senior House Officer level (DoH, 2003). Programmes for the new training grades are being piloted. One focus of these pilots will be to ensure that clinical competencies can be assessed robustly and fairly, to make certain that doctors are 'fit for purpose'. While national guidelines on assessment will be provided, their implementation and impact need to be evaluated. Changes in shift patterns of work have led to comments that consultants no longer know their junior house officers and feel ill-equipped to assess their competence. At the Leicestershire, Northamptonshire and Rutland Postgraduate Deanery, the foundation pilots will commence in August 2004. Systems and processes for assessment need to be developed in light of evidence-based models of good practice. However, there is a paucity of research on the assessment of PRHOs, and little is known about formal and informal opportunities for ward-based assessment of clinical competencies. This paper presents initial findings from a study of work-based assessment opportunities for PRHOs. A number of PRHOs are being 'tracked' across the year, through observation and recording of work-related interactions. The aim is to identify which healthcare personnel are best placed to assess junior doctors. Early indications are that a variety of healthcare workers (including senior nurses, radiographers and others) are well placed to undertake ward-based assessment of specific competencies. Implications for the assessment of junior doctors and the training of assessors will be discussed.

 

First OSCE in Uruguay: assessment of clinical skills
Keywords: OSCE, clinical skills, psychometric analysis
Authors: Gastón Garcés, MD; Alicia Gómez, MD; Martín Harguindeguy, MD and Enrique Macri MSC(Eng.)
Institution: Medical Education Department. Medical Department: "Clínica Médica C", Prof. Adriana Belloso.

Surgical Department: "Clínica Quirúrgica B", Prof. Carlos Gómez Fossati. Hospital de Clínicas, Facultad de Medicina. Universidad de la República Oriental del Uruguay.

Summary: A large-scale patient-based objective structured clinical examination (OSCE) was used for the first time in Uruguay, at the University Hospital. The test was developed for first-year clerkship students (the fourth year of an eight-year programme). The traditional clinical exam used until now was replaced by the OSCE, while the written exam and continuous tutor assessment remained unchanged. A total of 246 students were evaluated in 7 identical parallel tracks. The exam is composed of 14 seven-minute clinical stations (4 history-taking, 4 physical examination, and 6 other situations including radiographs, electrocardiograms, and vignettes asking for basic laboratory studies). History-taking and physical examination stations allocate 15% of the marks to communication skills and 5-15% to organization skills. Fifty-six medical tutors, who acted as observers with standardized observation grids, 18 standardized patients and 16 collaborators for physical examinations were trained beforehand. Cronbach's alpha was 0.71, and the item-total correlation was >= 0.27 for 11 items (minimum 0.27, maximum 0.53). Differences between tracks, investigated using ANOVA with the Tukey-Kramer post hoc test, showed a statistically significant difference between only two of the tracks. The cut score for the 8 standardized-patient-based stations was the mean minus 1 SD, and the cut scores of the other 6 stations were slightly modified by criterion. The minimum number of stations required to pass was 9, calculated by rounding the mean minus 1 SD. The global cut score was the mean of the station cut scores.

References: Brailovsky CA, Grand'Maison P. Using Evidence to Improve Evaluation: A Comprehensive Psychometric Assessment of an SP-Based OSCE Licensing Examination. Advances in Health Sciences Education 2000;5:207-219.
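As an illustration of the standard-setting arithmetic described in the abstract above (station cut score = mean minus 1 SD; global cut score = mean of the station cut scores), here is a hypothetical sketch; the score matrix is invented, not the Uruguayan data.

```python
# Hypothetical sketch: station cut scores as mean - 1 SD, global cut as their mean.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(loc=65, scale=12, size=(246, 8))  # 246 candidates x 8 SP-based stations

station_cut = scores.mean(axis=0) - scores.std(axis=0, ddof=1)  # mean - 1 SD per station
global_cut = station_cut.mean()                                  # mean of station cut scores

print("Station cut scores:", np.round(station_cut, 1))
print("Global cut score:", round(global_cut, 1))
```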

 

Method for resident performance assessment and evaluation
Keywords: evaluation, interpersonal and communication skills, practice-based learning,
Authors: Ferrer M, Fernandez M, Garcia-Velloso MJ, García N., Pueyo J, Rodriguez Paz JM, Carretero C, Palazuelos, J, Amillo S.
Institution:
Comision de Docencia, Clinica Universitaria, Universidad de Navarra

Summary: Assessing and evaluating the outcomes of a resident's learning process is a difficult task. We wanted to design a tool that evaluates not only medical knowledge but also patient care, interpersonal and communication skills, and professionalism. To achieve this in practice-based learning, we designed a "Resident file", initially in written form, with the aim of converting it to an electronic format so that each resident can fill in the items as they perform each task. The purpose of the file is to offer an evaluation tool based on real activity and achievements. We first agreed on the goals of each program and individualized the objectives. The file contains six sections: 1. General information, containing everything a new resident needs to start working. 2. Program director activities, containing interviews, problem resolution, solutions to ethical problems and the acquisition of values. 3. Clinical, surgical and teaching activities, including sessions delivered and received. 4. Curriculum vitae. 5. Papers published, rotation reports and institutional participation. 6. Evaluation activities, including written tests taken, rotation evaluations and the program director's final evaluation. A follow-up schedule is also programmed, allowing weekly or monthly completion of time charts. This file could be an effective method for assessing resident performance throughout the program and for using the results to improve it. It would also provide regular and timely performance feedback to residents, including written evaluations, and maintain an accessible record of evaluations.

 

Doing the station or writing the check list – comparison of two OSCE methods
Keywords: OSCE, assessment methods, written station
Authors: Weinreb, B.; Sternlieb, R.; Riven-Leibovitch, O.
Institution: Faculty of Health Sciences, Ben Gurion Univ., Israel

Summary: The OSCE has been used for many years now, and a large amount of data has been published on different aspects of the tool. Among the disadvantages described in previous publications, the complicated logistics and the cost remain unresolved issues. The aim of this study was to compare the psychometrics of a "regular" OSCE with a "written" OSCE offered to nursing students. We assumed that practical performance in OSCE stations might yield results similar to those obtained by having students write the checklist for identical topics. First-year nursing students took their final exam, which is usually an OSCE-type examination based on 10 stations. On the exam day, the class was randomly divided into two halves: half of the class took the regular OSCE format (simulation stations assessed by an observer filling in the checklist), while the other half were presented with the same topics included in the OSCE stations and asked to write a checklist for each station. Comparison of the two methods revealed that specific topics should be assessed by practical OSCE stations, whilst other competencies can be assessed by written OSCE stations. The findings can provide a partial solution to the high cost and logistical complexity of the OSCE.

 

Deep knowledge structure is associated with increased odds of diagnostic success in novices
Keywords: Knowledge structure
Authors: Kevin McLaughlin, Sylvain Coderre, Garth Mortis, Henry Mandin.
Institution: University of Calgary

Summary: Background: Diagnostic reasoning involves applying stored knowledge to a new problem. The relationship between knowledge structure and diagnostic success is unclear. Similarly, the determinants of knowledge structure are poorly understood. The objectives of this study were to identify variables associated with knowledge structure and diagnostic success in novices.

Method: This was a cross-sectional study of novices in four clinical presentations: hyponatremia; hyperkalemia; metabolic acidosis; and metabolic alkalosis. The dependent variables were knowledge structure type (deep vs. surface), determined by concept sorting, and diagnostic success. Explanatory variables were gathered using a questionnaire. Data were analyzed using multiple logistic regression.

Results: Thirty first-year medical students participated. Scheme use by small group preceptors and male sex were associated with increased odds of deep knowledge structure (OR 1.63 [1.14, 2.32], P = 0.007 and 3.04 [1.99, 4.63], P< 0.001, respectively) while the domain of hyperkalemia was associated with reduced odds of deep knowledge structure (OR 0.51 [0.34, 0.76], P = 0.001). Deep knowledge structure was associated with increased odds of diagnostic success (OR 1.53 [1.04, 2.25], P = 0.03) while the domain of hyperkalemia was associated with reduced odds of diagnostic success (OR 0.48 [0.31, 0.73], P < 0.001).

Conclusions: In novices, knowledge structure varies between domains. Scheme use by small group preceptors and male sex are independently associated with increased odds of deep knowledge structure. Diagnostic success also varies between domains, and deep knowledge structure in any given domain is associated with increased odds of diagnostic success in that domain.
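A minimal sketch of the type of analysis reported above, assuming the statsmodels library and invented data: a multiple logistic regression whose exponentiated coefficients give the odds ratios for deep knowledge structure.

```python
# Hypothetical sketch of a multiple logistic regression yielding odds ratios.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 120  # e.g. 30 students x 4 clinical presentations (invented data)
df = pd.DataFrame({
    "scheme_use": rng.integers(0, 2, n),   # preceptor used schemes (0/1)
    "male": rng.integers(0, 2, n),         # student sex (0/1)
    "hyperkalemia": rng.integers(0, 2, n), # domain indicator (0/1)
})
# Simulate the binary outcome (deep vs. surface knowledge structure).
logit = 0.5 * df.scheme_use + 1.1 * df.male - 0.7 * df.hyperkalemia - 0.2
df["deep_structure"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["scheme_use", "male", "hyperkalemia"]])
model = sm.Logit(df["deep_structure"], X).fit(disp=False)
print(np.exp(model.params))  # odds ratios for each explanatory variable
```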

 

Can Standardized Patients Replace Physicians as OSCE Examiners?
Keywords: OSCE; evaluation
Authors: Laura Gregor, Sylvain Coderre, Allan Jones, Kevin McLaughlin
Institution: University of Calgary

Summary: Background: To reduce inter-rater variability and the demand on physician time, standardized patients (SPs) are being used as examiners in OSCEs. SPs may not, however, have sufficient training to provide a valid evaluation of competence and/or to provide feedback on clinical skills. The objectives of this study were to: examine student attitudes towards SP examiners; compare SP and physician evaluations of competence; and compare the predictive validity of these scores, using performance on the summative multiple-choice question examination (MCQE) as the outcome variable.

Methods: This was a cross-sectional study of third-year medical students undergoing an OSCE at the midpoint of their Internal Medicine clerkship rotation. Student attitudes towards SP examiners were evaluated using a questionnaire. Student scores for the OSCE and summative MCQE examinations were collected.

Results: Fifty-two students rotated through 8 OSCE stations (6 physician examiners, 2 SP examiners). Most students reported that SP stations were less stressful, that SPs were as good as physicians at giving feedback, and that SPs were sufficiently trained to judge examination skills. SPs scored students higher than physicians (mean ± SD: 90.4 ± 8.9% vs. 82.2 ± 3.7%, p<0.001). SP and physician scores correlated weakly (coefficient 0.4, p=0.003). Physician scores were predictive of MCQE scores (regression coefficient = 0.88 [0.15, 1.61], P = 0.019) but there was no relationship between SP scores and MCQE scores (regression coefficient = -0.23, P = 0.133).

Conclusions: SP examiners are acceptable to medical students, SPs rate students higher than physicians and, unlike physician scores, SP scores are not related to other measures of competence.
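The comparisons reported above could be reproduced along the following lines (invented data; scipy assumed available): a paired comparison of SP and physician scores, their correlation, and a simple regression of MCQE scores on physician scores.

```python
# Hypothetical sketch comparing SP and physician OSCE scores and their
# relationship to MCQE performance (invented data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 52
physician = rng.normal(82, 4, n)
sp = rng.normal(90, 9, n)
mcqe = 0.9 * physician + rng.normal(0, 5, n)

t, p = stats.ttest_rel(sp, physician)                    # do SPs score students higher?
r, p_r = stats.pearsonr(sp, physician)                   # SP vs physician agreement
slope, intercept, r_m, p_m, se = stats.linregress(physician, mcqe)  # predicting MCQE

print(f"SP - physician difference: t={t:.2f}, p={p:.3f}")
print(f"SP/physician correlation: r={r:.2f}")
print(f"Physician score predicting MCQE: slope={slope:.2f}, p={p_m:.3f}")
```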

 

The borderline group method for making pass/fail decisions in OSCE: Exploring the stability in the Medical Council of Canada Part II examination
Keywords: OSCE, pass/fail decision, licensure
Authors: Birtwhistle, R., Wood, T., Smee S. M., Blackmore D.E.
Institution: Medical Council of Canada

Summary: The Medical Council of Canada (MCC) has been using the borderline group method for making pass/fail decisions for its licensure objective structured clinical examination (OSCE) since 1994. 1 This OSCE is known as the MCC Qualifying Examination Part II (MCCQE Part II) and leads to the Licentiate of the MCC (LMCC), which is used as a prerequisite to licensure in Canada. The MCCQE Part II is administered in Canada to about 2300 candidates per year. The borderline group method asks the experienced clinician examiners who score the OSCE to make a global decision about each candidate for each station using 6 categories: Excellent, Very Good, Borderline Satisfactory, Borderline Unsatisfactory, Poor and Inferior. The passing score for each station is the mean score of the candidates considered borderline satisfactory or borderline unsatisfactory. The passing score for the examination is a sum of these scores plus 1 standard error of measure. This study examines the stability of the passing standard for this 14-station OSCE based on the past five years of data (1998-2003). Changes in the proportion of candidates assessed as borderline and proportion of borderline satisfactory vs. borderline unsatisfactory will be compared, as will standards based on first time test takers versus all test takers and Canadian trainees versus all test takers. We will also assess the stability of the standard for the OSCE over this time period and discuss the validity of this approach for setting a pass/fail standard on a large-scale clinical performance examination.

1. Using the Judgments of Physician Examiners in Setting the Standards for a National Multi-center High Stakes OSCE, Dauphinee WD, Blackmore DE, Smee SM, Rothman AI and Reznick R, Advances in Health Sciences Education 2: 201-211, 1997.
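A small sketch of the borderline group calculation described in the abstract above (illustrative data only): the station passing score is the mean score of candidates given a borderline global rating, and the examination standard is the sum of station standards plus one standard error of measurement.

```python
# Illustrative borderline-group standard setting (invented data).
import numpy as np

rng = np.random.default_rng(4)
n_candidates, n_stations = 2300, 14
scores = rng.normal(70, 10, size=(n_candidates, n_stations))
# True = candidate rated borderline (satisfactory or unsatisfactory) at that station.
borderline = rng.random(size=(n_candidates, n_stations)) < 0.15

# Station standard = mean score of candidates judged borderline at that station.
station_standards = np.array([
    scores[borderline[:, s], s].mean() for s in range(n_stations)
])

sem = 2.5  # assumed standard error of measurement for the total score
exam_passing_score = station_standards.sum() + sem
print(round(exam_passing_score, 1))
```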

 

Psychometric Characteristics and Response Times of One-Best-Answer Items in Relation to Number and Source of Options
Keywords: MCQ licensing exam
Authors: Kathleen Z. Holtzman, David B. Swanson, Brian E. Clauser, Amy J. Sawhill, and Douglas R. Ripkey
Institution: National Board of Medical Examiners

Summary: Purpose: This study investigated the impact of the number and source of options on psychometric characteristics (p-values and biserials) and response times for multiple-choice questions (MCQs) appearing on Step 2 of the United States Medical Licensing Examination (USMLE). Instrumentation. 90 sets of MCQs (260 total items, all in the patient vignette format) were used; numbers of options in original versions of items ranged from 11 to 25. Statistical information was available from prior use for 40 sets; the other 50 sets had no prior use. For used and unused MCQs, a USMLE Step 2 item-writing committee reviewed the original option list and selected the five options viewed as most appropriate; no statistical information guided selection. For the 40 used items, two NBME staff reviewed the percentage of high-and low-scoring examinees selecting each option and created 5- and 8-option versions of items designed to maximize item discrimination.

Procedure: Study items were embedded in unscored slots of the computer-based Step 2 in a fashion that ensured no examinee would see more than one version of an item. Response times and item responses for first-time examinees (roughly 400/item) were used in analysis, which consisted of calculating item difficulty (p-value), discrimination (biserial correlation with total scores), and response time in relation to number and source of options.

Results. The table below summarizes results. For both used and new items, as the number of options increased, items became more difficult (p < 0.001) and mean response times increased (p < 0.001), but item discriminations were unaffected.

Conclusion: Because use of larger numbers of options requires more testing time without increasing item discrimination, tests using smaller numbers of options should be somewhat more reliable per unit of testing time.
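For illustration, the two item statistics named in the purpose statement above can be computed roughly as follows; the responses are simulated, and the point-biserial correlation is used here as a simple stand-in for the biserial correlation reported by the authors.

```python
# Hypothetical computation of item difficulty (p-value) and item discrimination
# (point-biserial correlation used as a stand-in for the biserial).
import numpy as np

rng = np.random.default_rng(5)
n_examinees = 400
total_scores = rng.normal(200, 20, n_examinees)
# Simulate correctness on one study item, loosely tied to total score.
item_correct = (rng.random(n_examinees) <
                1 / (1 + np.exp(-(total_scores - 200) / 15))).astype(float)

p_value = item_correct.mean()                                    # item difficulty
discrimination = np.corrcoef(item_correct, total_scores)[0, 1]   # point-biserial

print(f"p-value: {p_value:.2f}, discrimination: {discrimination:.2f}")
```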

 

Application of a logbook for clinicaltraining at the Granada medical school
Keywords: Logbook, clinical skills
Authors: Campoy, C.
Institution: School of Medicine. University of Granada

Summary: The aim of the present study was to develop a logbook for clinical training and to analyze the effect of this support on the training and evaluation procedure. Each skill was included, by consensus among clinicians, in a list organized by subject, with three levels for each skill (seen, practised, routine). At the end of the clerkship each student was evaluated using the logbook; the tutors were also evaluated by the students. The results show that 81% of the students used the logbook during the training period. Nevertheless, all of them considered the logbook very useful, not only for their own evaluation but also for evaluating tutors and for keeping track of which clinical skills have to be learned in each area. The logbook also provided good documentary guidance for both students and tutors, and they considered the assessment criteria defined to be very good (56%) or good (44%). The educational value of the training after the introduction of the changes was considered very high (70%) or high (10%). The use of the logbook strongly motivated the tutors and stimulated the use of different teaching techniques. Supported by the Leonardo da Vinci EU agency. Project: Mandatory training period: guidelines for a new approach.

 

Lost in relation? – A formative evaluation strategy to improve the quality of peer-teaching and peer-assessment in an undergraduate basic clinical skills course by contrasting performance exam group results
Keywords: formative evaluation, peer-teaching, peer-assessment, skills training
Authors: Schmidts, M.; Link, Th.
Institution: Institute for medical education Medical University of Vienna

Summary: At the Medical University of Vienna (~600 graduates/year) we rely on peer-teaching and peer-assessment to ensure supervised small-group training and outcome control. To monitor the quality of our basic clinical skills course we perform peer-observed 3-station "mini"-OSCEs. In addition to formative and summative student feedback, we interpret OSCE group results to reflect the global course outcome (administrator feedback), the different outcomes of peer-training subgroups (trainer feedback) or the different rating behavior of peer-observers (observer feedback). Figure 1 shows an electronically generated trainer-feedback report that our peers receive following an OSCE*. It summarizes the item marks of 5 students trained by peer A and observed by peer B at the station "blood pressure measurement" (in comparison to the overall station outcome, n=71). Items are color-coded according to their percentage of group accomplishment, and "critical" items are highlighted yellow or red. A highlighted item might indicate that 1) students performed weaker than average, or 2) the group was trained below average by peer A, or 3) peer A taught another standard than peer B assessed, and/or 4) peer B judged using another standard than his colleagues. Our peer-trainers/observers are required to reflect on the feedback reports and to discuss the highlighted items with their corresponding colleagues. This contrasting strategy enables us

- to make our course outcomes more transparent,
- to diminish discrepancies in training standards,
- to increase interrater-reliability and
- to improve training quality.

In addition to peer-training/assessment, this approach might be generally applicable for settings with expert trainers/observers or with SPs as raters.

Figure 1:

Automatically produced peer-trainer feedback (name of trainer and observer removed) for the station "blood pressure measurement". Problematic items are highlighted yellow or red.

* Schmidts M. (2000) OSCE Logistics – Handheld Computers Replace Checklists and Provide Automated Feedback. Medical Education, 34, 957-958.

 

Development of a Rating Scale to Assess Medical Error Disclosure and a Comparison of its Psychometric Properties in Two Different Communication Media
Keywords: medical error disclosure, videoconferencing, standardized patients
Authors: David K. Chan,1 Arthur I. Rothman,1 Thomas H. Gallagher,2 Richard Reznick,1 and Wendy Levinson1
Institution: 1 University of Toronto, Toronto, Ontario, Canada 2 University of Washington, Seattle, Washington, USA

Summary: Purpose: To describe the development of a rating scale specific to medical error disclosure, and compare its psychometric properties between face-to-face and videoconferenced standardized patient (SP) assessment.

Method: Surgeons' error disclosure skills were assessed using the rating scale and three SP scenarios. Surgeon-SP encounters were conducted in a conventional face-to-face medium (Toronto surgeons) and over videoconferencing (St. Louis surgeons). Internal consistency of items, inter-rater reliability, inter-case reliability, and the surgeons' performance were compared between the two media.

Results: The psychometric properties of the rating scale were comparable between media, with acceptable internal consistency and inter-rater reliability. There was no significant difference in the aggregate scores of the surgeons' performance between media.

Conclusions: A rating scale specific to error disclosure was developed with acceptable psychometric properties over two different communication media. Delivering educational programs and assessing communication skills over videoconferencing is feasible and holds promise for future efforts.

 

Neonatology OSCE: certification of expertise
Keywords: Neonatology, OSCE, Certification
Authors: Arnau J, Esqué T, Zuasnabar A, Fina A, Moral A, Raspall F, Barragán N, Martínez-Carretero JM.
Institution: Institut d'Estudis de la Salut

Summary: The Neonatology Group of the Catalan Paediatric Society and the Institute of Health Studies have conducted three OSCE examinations in the last three years (2001-2003). A total of 48 professionals have been evaluated by means of this assessment tool. The Neonatology OSCE is a multiple-station examination, with 13 cases distributed over 21 stations. The length of each station is 10 minutes, with 2 minutes in between. The OSCEs are conducted in the outpatient clinics of a Barcelona university hospital. Neonatology in Spain is not yet a medical speciality; for this reason, a professional competence certification for this particular expertise must be developed. Moreover, the Catalan Public Health System is quite interested in assessing the competences of these professionals for specific job applications in the Catalan public hospital network. In the first three OSCE editions, the mean global score was above 65% in each edition. Among the different assessed clinical objectives in the 2003 edition, the highest scores were observed for preventive activities (73.8%) and technical skills (73.2%), and the lowest for history taking and physical examination (43.7%). The first three editions of the Neonatology OSCE have demonstrated its validity and feasibility and, above all, the high satisfaction expressed by the professionals who went through this certification tool.

 

Shift in a candidate's acceptability due to shifts in ability of the applicant pool
Keywords: admission, selection
Authors: Peter H. Harasym, Rod Crutcher, and Doug M. Lawson
Institution: University of Calgary

Summary: Background: Selecting the best applicants from a given pool of candidates for entry into an educational program is always difficult. When no absolute criteria are available, it is possible that a candidate's ranking could shift depending on the quality of the applicant pool.

Purpose: This study examined the shift in candidate acceptability into the Alberta International Medical Graduate (AIMG) program over 3 years.

Method: In each year from 2002 to 2004, 8-20 candidates were selected using a four-step selection procedure: initial file review, OSCE, interview and then final selection. In the final selection, six judges rated each candidate's relative strength within each data source and assigned an overall suitability rating using a 5-point scale. A 3-faceted (candidate, judge, and year) Rasch model placed the variables onto one scale to determine the equivalence of the candidate pool from year to year.
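
The abstract does not give the exact parameterization of the model; a standard form of a three-facet Rasch rating-scale model with facets for candidate, judge and application year would be

\[ \ln\!\left(\frac{P_{njyk}}{P_{njy(k-1)}}\right) = B_n - C_j - T_y - F_k , \]

where \(P_{njyk}\) is the probability that candidate \(n\), rated by judge \(j\) in year \(y\), receives category \(k\) rather than \(k-1\) on the 5-point suitability scale, \(B_n\) is the candidate's ability (suitability), \(C_j\) the severity of judge \(j\), \(T_y\) the relative difficulty attached to application year \(y\), and \(F_k\) the threshold of category \(k\). Estimating all facets on this single logit scale is what allows the measures of repeat candidates to be compared across years.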

Data: 76 candidates applied for the AIMG program in 2002-2004. 11 of the candidates applied for entry into the program more than once. Also, 9 of 13 judges reviewed candidates' files in 2 or more years.

Results: A scree test provided evidence of unidimensionality. Infit and outfit statistics were good, providing further evidence that the data fit the model. The reliability estimates were high (0.93-0.99). The Rasch modelling found significant variability in the relative ability of the candidate pool by year of application, which altered the final rank ordering of repeat candidates.

Conclusions: The 3-faceted Rasch modelling provided evidence of significant shift in candidate acceptability due to changes in the quality of the applicant pool.

 

"Trainer Academy": The first step toward standardization of a standardized patient examination
Keywords: SP Training
Authors: King, A.
Institution: National Board of Medical Examiners

Summary: The most basic requirement of a standardized patient (SP) examination is the consistency of the standardized patients' performance. As the number of standardized patient trainers and test administration sites expands, the challenge of achieving standardized patient consistency increases further. When the scope of a project requires multiple trainers, the development of standardized training protocols is critical to ensure consistency of standardized patient performance. The National Board of Medical Examiners (NBME), in collaboration with the Educational Commission for Foreign Medical Graduates (ECFMG), will implement a large-scale, multi-site standardized patient examination as part of the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills examination (Step 2 CS). This paper describes the protocols that were developed to train all standardized patient trainers involved in the administration of the Step 2 CS exam. The Trainer Academy is a five-day intensive session that gives each trainer experience with the training protocols and the training material. Graduates of the Trainer Academy are expected to train SPs in a consistent manner, thereby enhancing SP performance. The protocols outlined in this paper are relevant to all large-scale standardized patient programs.

 

Test construction to explore if pharmacological and therapeutic knowledge are applied to drug treatments
Keywords: Pharmacology, knowledge assessment, students, residents, physicians
Authors: Marín-Campos, Y.
Institution: Faculty of Medicine, National Autonomous University of México

Summary: Research has shown that knowledge of the basic sciences is an important element in achieving competence in the practice of medicine; evidence suggests that experts are guided in their performance by basic concepts and principles within the subject matter of their fields. However, medical schools and health care services have reported problems with the training of students, residents and physicians in the use of drugs. Despite this, there is a lack of studies on pharmacological knowledge and its application to drug treatments. This study assesses the knowledge of students, residents and physicians in order to explore their recall of basic pharmacology concepts and principles and the application of that knowledge both to specific drug actions and to determining drug treatments in clinical cases. To assure test validity, the test was constructed on the basis of published works that describe rules and procedures related to the structure and content of questions and cases. The test development process comprised the following stages: a) training of item developers; b) definition of the test domain, in which the blueprint was designed; c) development of test items, using the extended-matching item format to explore factual and procedural knowledge. Participants were students, residents and physicians from different medical schools and hospitals. A 65-item multiple-choice test was applied to explore the identification of concepts and principles, the application of this knowledge to identify drug actions, and cases in which drug treatments should be prescribed. The results showed score differences among subjects at the three expertise levels and correlations among the items exploring factual and procedural knowledge.

 

Do attending physicians, nurses, and residents rate medical students differently? An inter-rater reliability study from the IIME Project in China
Keywords: inter-rater reliability, student performance
Authors: Moyer, C.A., Ni, C., Stern, D.T., Sippola, E., Wojtczak, A., Schwarz, M.R.
Institution: University of Michigan Medical School

Summary: Introduction: Outcome-based assessment of medical student performance is an integral part of evaluation, yet concerns about rater bias and reliability remain. In this study, ratings of students at one school, collected in the course of an international assessment project, are analyzed to compare the ratings given by nurses, residents, and attending physicians.

Methods: Attending physicians, nurses, and residents were asked to evaluate 69 Chinese medical students over a three-month period. Each evaluator used a 16-item checklist (1-5 point scale, 5 being highest), from which subscale scores for professionalism (7 items) and communication skills (6 items) were calculated. One-way ANOVAs compared the overall mean scores on professionalism and communication skills given by each type of rater. Time intervals were also compared to determine whether ratings changed over time.
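
As a hedged illustration of the analysis described here (not the authors' code), a one-way ANOVA across the three rater groups followed by pairwise comparisons could be sketched as follows; the score vectors are invented, and because the post-hoc procedure is not stated, plain Welch t-tests stand in for it:

# Minimal sketch: one-way ANOVA comparing professionalism subscale scores
# across rater groups, followed by pairwise t-tests. Scores are simulated.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(1)
scores = {
    "residents":  rng.normal(4.7, 0.3, 100),
    "attendings": rng.normal(4.5, 0.3, 100),
    "nurses":     rng.normal(4.4, 0.3, 100),
}

f, p = stats.f_oneway(*scores.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

# Post-hoc pairwise comparisons (Welch t-tests as a stand-in procedure)
for (name_a, a), (name_b, b) in combinations(scores.items(), 2):
    t, p_pair = stats.ttest_ind(a, b, equal_var=False)
    print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p_pair:.4f}")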

Results: Our results indicated significant differences across rater groups at p<.001 for both professionalism and communication skills subscales. Post-hoc pair-wise comparisons indicated that for professionalism, residents rate medical students significantly higher (4.69) than attending physicians (4.53, p=.015) and nurses (4.41, p<.001). The trend for communication skills was similar (residents 4.63, attending physicians 4.47, nurses 4.32, all p<0.05). Our results indicate no significant change in ratings over time.

Conclusions: Attending physicians, residents, and nurses provide significantly different perceptions of student performance. While no one of these could be considered a "gold standard" against which the others should be compared, the composite likely provides a more realistic perspective of how students perform.

 

Global Essential Competencies and Their Evaluation
Keywords: Global Essential Competencies, Assessment, International Standard-Setting
Authors: Wojtczak, A., Schwarz, M.R., Stern, D.T., Yao T., Wan, X.
Institution: Institute for International Medical Education

Summary: The Institute for International Medical Education (IIME), using a worldwide network of experts, defined the Global Minimum Essential Requirements ("GMER"). These include sixty (60) learning objectives grouped under seven (7) broad domains that define the knowledge, skills, professional behavior and ethics that all graduates must possess regardless of where they are educated. The seven domains are: (1) Professional Values, Attitudes, Behavior and Ethics; (2) Scientific Foundation of Medicine; (3) Clinical Skills; (4) Communication Skills; (5) Population Health and Health Systems; (6) Management of Information; (7) Critical Thinking and Research. The IIME Task Force, composed of international experts on assessment, identified evaluation tools to assess whether graduates had acquired these competencies. In cooperation with eight (8) leading medical schools in China, the assessment tools were finalized and translated into Chinese. The examination, consisting of 150 MCQ items, 15 OSCE/SP stations and observational ratings, was administered in October 2003 and overseen by IIME observers. Based on the results of the exam, the international competency standards at the student level were defined by the multidisciplinary international Standard-Setting Group, and the standards at the school level were defined by the IIME Core Committee. The reports for the medical schools, the participating students and the Ministers of Health and Education outline areas for improvement. This pilot implementation produced a group of trained Chinese medical educators capable of incorporating the "GMER" objectives into their curricula and of using international-quality assessment tools for student evaluation. This educational experiment indicates that it is possible to obtain agreement among international experts on a set of essential global competencies and on tools to assess them, as a road to outcome-oriented medical education.

 

 

Assessment by medical residents of training received in the different hospital services: a monitoring tool
Keywords: Postgraduate medical education. Assessment. Quality training programme. Multiple Correspondence Analysis
Authors: Pijoan, J.I. 1, Moran, J.M. 1 , Urkaregui, A.2
Institution: 1 Research and Medical Education Unit, Hospital de Cruces, Baracaldo, Spain; 2 Department of Applied Mathematics, Statistics and Operational Research, Universidad del País Vasco, Leioa, Spain

Summary: Objective: The Spanish postgraduate medical education system lacks a global assessment process. The potential role of the views of residents on the quality and characteristics of the training provided by hospital services in this process is explored.

Methods: Administration, in a general teaching hospital, of a specific questionnaire* designed to measure residents' perceptions of the medical education received in the services in which they have trained. Multivariable analyses were performed using Multiple Correspondence Analysis (MCA) and Automatic Classification (AC) methods.

Results: 253 (95%) responses were obtained in 2003. MCA supports the questionnaire's validity. The following variables are associated with a better rating of the service: 1) knowing who the service's tutor is; 2) involvement in research activities; 3) good agreement of the actual training received with the scheduled program; 4) time spent as a resident (the longer, the higher the score); and 5) length of stay in the service (higher if longer than 3 months). AC finds five homogeneous groups according to the residents' views: 1) Excellent (15%) in clinical, educational, ethical and research training and also regarding the tutor; these services held more frequent clinical sessions. 2) Adequate (41%) in all aspects. 3) Poor (26%) in all aspects. 4) Very poor (3.6%); these services held no clinical sessions. 5) Mixed group (13.4%), whose main feature is non-response about the tutor.
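
The abstract names Multiple Correspondence Analysis but gives no computational detail; a minimal sketch of MCA as correspondence analysis of the one-hot indicator matrix, with hypothetical questionnaire variables (not the Cruces data), is:

# Sketch of MCA: one-hot encode categorical answers, then run a weighted SVD
# (correspondence analysis) on the indicator matrix. Variable names are invented.
import numpy as np
import pandas as pd

def mca(df: pd.DataFrame, n_components: int = 2):
    """Return respondent coordinates on the first MCA dimensions."""
    Z = pd.get_dummies(df.astype(str)).to_numpy(dtype=float)   # indicator matrix
    P = Z / Z.sum()                                            # correspondence matrix
    r = P.sum(axis=1)                                          # row masses
    c = P.sum(axis=0)                                          # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))         # standardized residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U * s) / np.sqrt(r)[:, None]                       # row principal coordinates
    return rows[:, :n_components], s[:n_components] ** 2       # coordinates, inertias

# Hypothetical example: three categorical questionnaire items for five residents
df = pd.DataFrame({
    "knows_tutor":  ["yes", "no", "yes", "yes", "no"],
    "research":     ["yes", "no", "no", "yes", "no"],
    "matches_plan": ["good", "poor", "good", "good", "poor"],
})
coords, inertia = mca(df)
print(coords)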

Conclusions: The questionnaire provides valid and useful information for the monitoring of the postgraduate medical education system (fig 1). Important differences among hospital services are detected.

 

Key elements for successful clinical training
Keywords: clinical training, communication skills
Authors: Kauppinen, R.; Sjöblom, F.; Vitikka, A.
Institution: Department of Research and Development of Medical Education,University of Helsinki, Finland

Summary: Aim: To find key elements for successful clinical training. Background: After two years of clinical studies, medical students practice in surgical and internal medicine wards for four weeks. During the clinical courses they have been trained in minor clinical procedures and in writing medical reports.

Summary of the work: Students (n=63) recorded their actions in a logbook while training in local hospitals (n=13). Students and their instructors (n=170) evaluated the students' performance using a fixed 1-5 rating scale (1 = insecure, 5 = very confident) in addition to open-ended feedback. Summary of the results: Every student performed clinical actions (n=1461) during the placement. Students were confident in their performance (mean = 3.71; 10 items, Cronbach's alpha 0.62), which was in accordance with their instructors' findings (mean = 4.1; Pearson's correlation 0.691, p < 0.001). In contrast, evaluation of medical reports showed that the students (mean = 3.55) underestimated their performance compared with their instructors' assessment (mean = 4.31). Students appreciated well-organised meetings, clinical rounds, operations, and practice in an emergency room (mean = 3.71; 20 items, Cronbach's alpha 0.77). Instructors appreciated the students' communication skills (mean = 4.47; 3 items, Cronbach's alpha 0.74), in which the students had previously been trained (7.5 ECTS credits). Conclusion: Key elements for successful clinical training are good clinical guidance from experienced physicians and a sufficient number of patient contacts and clinical actions. Practical training already during the clinical courses provides students with good clinical skills, so that they are confident in their performance later in practice.
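
For reference, a minimal sketch of the Cronbach's alpha statistic quoted above (the ratings matrix here is random, not the Helsinki data):

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
ratings = rng.integers(1, 6, size=(63, 10)).astype(float)   # simulated 1-5 ratings
print(round(cronbach_alpha(ratings), 2))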

 

Professionalism in Medicine: Evaluation System of the Development of Professionalism Competencies in Students of Medicine
Keywords: Professionalism Assessment and Evaluation
Authors: Hernández, C.
Institution: Instituto Tecnológico y de Estudios Superiores de Monterrey

Summary: The Project Professionalism of the Monterrey Tech School of Medicine involves the implementation of a competency-based curriculum which emphasizes professionalism. One of its main objectives is to generate an evaluation system which ensures the students' acquisition of all the competencies necessary for adequate performance as a medical professional. Through a collaborative effort of the Academic Committee of the School of Medicine, based on a thorough review of the literature and on the judgment of experts in the fields of academics and ethics, the design of such an evaluation system has been achieved. This complete and integral evaluation system will be implemented throughout the courses of the MD program. The five evaluation instruments that comprise this system are: (1) a decision-making evaluation instrument, a simulated-cases exam based on vignettes that permits evaluation of the student's ability to identify, emphasize and apply ethical principles in the professional decision-making process; (2) a standardized feedback format based on global rating of live performance, which favors the detection of areas of opportunity as well as areas of exemplary development of professionalism competencies; (3) a standardized moral judgement test to evaluate the ability to make decisions according to moral principles; (4) a tutoring system whose objective is the development of the areas of opportunity and excellence detected in the student; and finally (5) a portfolio which ensures the student's understanding of his or her personal development of competencies through written self-reflection. In this paper, this integral evaluation system and the evaluation instruments that comprise it are presented, with the expectation that its application can establish a model for evaluating professionalism competencies.

 

Catalan family medicine OSCE: the failing candidates
Keywords: OSCE, family physicians, pass/fail criteria, failing candidates.
Authors: Blay C, Vilatimó R, Arnau J, Vilaseca JM, López Sanmartín C, Juncosa S, Martínez-Carretero JM.
Institution: Institut d'Estudis de la Salut

Summary: The Catalan Society of Family Medicine and the Institute of Health Studies have jointly conducted 15 editions of the Family Medicine OSCE for certification purposes. A total of 439 family physicians have been assessed during the last 7 years (1997-2003). In these OSCE editions, participants were practising family physicians, some of them tutors of family and community medicine from teaching units of residency programmes. All the Family Medicine OSCE editions have demonstrated the examination's validity, reliability and feasibility, as well as its good acceptability among the candidates who went through it; the examination nowadays has a growing impact on professional careers. As a certification assessment, pass/fail criteria have been established, based mainly on global and per-component scores. XX candidates (XX,X%) have not attained the pass/fail criteria and therefore represent the population at risk of an insufficient level of practice. In order to confirm whether failing candidates share a specific profile, their professional and demographic characteristics are described and compared with those of passing examinees. Some associated factors, such as age, time in practice, teaching responsibilities and practice environment, seem to be present.

 

Israeli Primary Care Physicians Competence Assessment - the PAMP project
Keywords: Physician Performance assessment, OSCE, Primary Care
Authors: Reis, S.
Institution: Technion

Summary: Aim: The PAMP (Physician Assessment in Medical Practice) project is a study of CME needs assessment of Primary Care Physicians (PCPs) in Israel. PAMP1 validated an OSCE (10 two-visit stations of 25 minutes each) with Physician Observer (PO) and Standardized Patient (SP) evaluations (Cohen et al., 2002*). PAMP2 was a modification of PAMP1 aiming to increase feasibility and to test new features.

Method: The modifications included, among others, a reduction in the number of stations and in testing time, and changing the post-encounter probe to a Structured Oral Examination (SOE) to capture clinical reasoning. 151 PCPs rotated through eight 24-minute stations in 11 PAMP2 sessions in 2002. The sample consisted of three heterogeneous PCP groups. Multiple scores and scales were marked by the POs and SPs and evaluated in the analysis.

Results: The PO scores, as well as the SOE, are also reliable and valid in PAMP2 for stations (content) and domains (skills). SP scores, which were reliable and valid in PAMP1, are less so in PAMP2. Feasibility is enhanced by the time and expense savings in PAMP2, but recruitment remains a formidable task.

Conclusions: PAMP2 is a formative, competence-based tool that serves a diagnostic post-screening purpose in CME and links assessment to learning. In spite of some limitations, it seems well suited to this purpose. In the future it may serve many purposes, be linked with indirect data on practice processes and outcomes, and in general serve as a basis for a comprehensive PPA for PCPs in Israel.

* Cohen, R., Amiel, G.E., Tann, M., Shechter, A., Weingarten, M., & Reis, S. (2002). Performance assessment of community-based physicians: evaluating the reliability and validity of a tool for determining CME needs. Acad Med. 77, 1247-1254. Erratum in: Acad Med. 2003: 78 ,417.

 

The situation of teacher evaluation at Kermanshah University of Medical Sciences
Keywords: teacher evaluation, situation, educational activity
Authors: Sh Iranfar, B Izadi, M Iranfar.
Institution: Kermanshah University of Medical sciences

Summary: The purpose of faculty evaluation is the improvement of teaching activities. This study was carried out to determine the situation of teacher evaluation at Kermanshah University of Medical Sciences. The study used a descriptive method: all results of completed faculty-evaluation questionnaires were collected over three periods. The variables included the total scores for personal characteristics, teaching method and academic abilities, each rated as good, medium or poor. The data were analysed using descriptive statistics and the chi-square test to determine the correlation between variables. The results of the three evaluation periods showed that 78.7%, 80.5% and 83% of the teachers, respectively, were men. The highest and lowest proportions of good ratings were for teachers' personal characteristics and teaching method, respectively (89.4%, 89.4% and 92.5% vs. 54.2%, 51.4% and 56.7%). There was no significant difference between the results of the three periods. Despite the stated purpose of faculty evaluation, the current study indicates that this method of evaluation has little effect on the improvement of teaching activities. It is recommended to study what kinds of problems prevent faculty evaluation from being linked to the improvement of teaching activities.

 

The teachers' communication skills and their relationship with teacher evaluation
Keywords: communication skills, verbal communication skills, non-verbal communication skills, evaluation
Authors: SH Iranfar, F Azizi, N Valaee
Institution: Kermanshah University of Medical sciences

Summary: One of the main problems of universities and educational centers is the evaluation of teachers' activities. The aims of this research were to determine the situation of teachers' communication skills and the relationship between teachers' communication skills and their evaluation. A descriptive study was carried out on 385 students, selected by random sampling, to determine the teachers' communication skills, followed by an analytical study to find the relationship between communication skills and evaluation. A questionnaire was designed to assess the teachers' communication skills, and the students completed it. After at least two weeks, the teachers' evaluation was carried out under the university rules. The students rated 60.4 percent of the teachers as suitable communicators and 39.6 percent as unsuitable. 51.9% of teachers had suitable educational activities. Male and female teachers had different educational activities (p>0.01). The study showed that a higher percentage of students were satisfied with the teachers' communication skills and educational activities. It is also concluded that there is a relationship between teachers' communication skills and their evaluation; verbal communication skills were more important than non-verbal communication skills in the evaluation. Experimental research is recommended to determine the effect of communication skills on evaluation.

 

Evaluation of communication skills in physicians, Shiraz, Iran, 1999
Keywords: Evaluation, communication skills, physicians
Authors: Rezaee, R.; Hosseini, J.; Valaee, N.
Institution: Shiraz University of Medical Sciences, EDC center

Summary: This research was carried out on general physicians and specialists in the city of Shiraz, Iran, with three basic objectives: 1) doctors' communication skills from the point of view of the patients; 2) doctors' communication skills from the point of view of the researcher; 3) doctors' attitudes towards the teaching and application of communication skills. The results showed that, from the point of view of the patients, 12.1% of doctors do not have appropriate communication skills and 60.1% have good communication skills. Patients' expectations of communication skills increased with their age and level of education. The communication skills of women doctors were better than those of men, and the communication skills of general physicians were better than those of specialists. From the point of view of the researcher, 47.5% of doctors do not have appropriate communication skills; again, the communication skills of women were better than those of men. 65% of doctors had a positive attitude towards the teaching and application of communication skills. This attitude was the same regardless of gender and speciality, but older and more experienced doctors pay more attention to communication skills.

 

Measurement of correlation between educational performance and verbal and nonverbal communication skills in Jahrom medical teachers
Keywords: communication skills teacher student
Authors: Amini, M.; Najafipoor, S.
Institution: Jahrom Medical University, Jahrom, Iran

Summary: Introduction: Effective communication between teachers and their students is an important element of the teaching process; these communication skills determine teaching quality.

Material and methods: This study was conducted on the academic staff of Jahrom Medical School; 30 academic staff members were selected. The data were collected in two steps: with one questionnaire, the teachers' verbal and non-verbal communication skills were evaluated; with another, the teachers' educational performance was evaluated in the second-to-last week of the term. The correlation between communication skills and educational performance was calculated with statistical tests.

Results: In the evaluation of both educational performance and verbal and non-verbal communication skills, the best grades were obtained by the academic staff of the community medicine department and the lowest by the academic staff of the physiology department. There was a statistically significant correlation between educational performance and verbal and non-verbal communication skills in all groups (p<0.05).

Conclusion: It seems necessary to educate our academic staff further in verbal and non-verbal communication skills; in this way their educational performance may improve considerably.

 

 

Establishment of a new evaluation and accreditation system for Graduate Medical Education (postgraduate medical training) in Iran
Keywords: program evaluation, accreditation, postgraduate training, Iran
Authors: Masood Naseripour MD, Azim Mirzazadeh MD, Kamran Yazdani MD, MPH, Behirokh Raisi MD, MPH, Masoumeh Haghighi MD
Institution: Iranian Council for Graduate Medical Education

Summary: Graduate Medical Education is one stage of the continuum of medical education in Iran. At present, nearly 5,500 residents are being trained in 24 specialties and 21 subspecialties at 26 Universities of Medical Sciences. Based on a special Act of the National Parliament in 1973, the Iranian Council for Graduate Medical Education is responsible for supervising the quality of Graduate Medical Education. Despite significant activities in this field during the last three decades, there has not yet been an integrated effort for the evaluation and accreditation of specialty education units. In this regard, the Secretary of the Council appointed a committee for preliminary studies and the presentation of new approaches. In this workshop, the presenters summarize the challenges of Graduate Medical Education and its relevant supervisory bodies, and the activities of this committee. Thereafter, the activities for the establishment of the new accreditation system, including the development of educational standards, will be described.

 

Assessment of clinical education of medical interns in internal medicine wards of Shiraz medical university
Keywords: assessment interns clinical education
Authors: Moghadami, M.; Amini, M.
Institution: Shiraz Medical University

Summary: The aim of this study was to determine the quality of clinical education in the internal medicine wards of Shiraz Medical University. A questionnaire consisting of four main parts (1. emergency management; 2. outpatient department (OPD) management; 3. necessary clinical skills such as lumbar puncture and intubation; 4. hospitalized patient management) was designed. Forty interns who had completed the internal medicine rotation were chosen as the case group; the control group was composed of 40 interns from other departments who had not yet completed the internal medicine rotation. In the first part (emergency management) the results of the case group were better than those of the control group (p<0.05). In the second part (outpatient department management) there was no significant difference between the case and control groups (p>0.05). In the third part (necessary clinical skills) the results of the case group were better than those of the control group (p<0.05). In the fourth part (hospitalized patient management) the results of the case group were significantly better than those of the control group (p<0.05). Overall, this study showed that the educational methods of the internal medicine department of Shiraz Medical University are acceptable for teaching emergency management, hospitalized patient management and necessary clinical skills, but there is a need to educate interns further about OPD management and the approach to common ambulatory diseases.

 

A survey of the quality of medical students' and assistants' practice in history taking and physical examination of patients
Keywords: Medical student, Assistant, History taking, Physical examination, Quality.
Authors: Kahooei, M.; Hasani Shariat Panahi, Sh.
Institution: Semnan University of Medical Sciences

Summary: Introduction: History taking and physical examination help physicians reach a valid diagnosis, on which the care process is based. It was therefore important to survey the practice of medical students and assistants in the teaching hospitals of Semnan University of Medical Sciences.

Materials and methods: This descriptive and analytic study surveyed the history taking and physical examination practice of 134 assistants and medical students in the teaching hospitals of Semnan University of Medical Sciences in 1999-2000. The measurement tool was a forty-section checklist, used after confirmation of its validity and reliability. Data collection was carried out by indirect observation of the interviews between the study subjects and patients and by review of the patients' medical history and physical examination reports.

Results: The relationship between educational setting and educational course, on the one hand, and practice, on the other, was significant (P=0.001). Fifty-one percent of the subjects were not able to reach an initial diagnosis. Only 15% of them were able to obtain more than 90% of the relevant medical information from the patients by history taking and physical examination.

Conclusion: The practice of the study group was unsatisfactory. The process of clinical education must be evaluated in both outpatient and inpatient settings.

 

Priority of medical education objectives in the basic sciences from the students' point of view
Keywords: medical education, goal, basic science, capability
Authors: Bazrafkan, L.; Nikseresht, A.; Bazargany, A.
Institution: Shiraz University of Medical Science

Summary: Introduction: This study aims at an educational needs assessment in Shiraz University of Medical Sciences. Based on the findings of this study, the priorities of the objectives were determined using the students' attitudes.

Methods: This is a descriptive study. A questionnaire about the general objectives of medical education, based on a revision of the medical curriculum, was designed to measure the participants' opinions regarding the objectives of medical education. Content validity was determined using experts' opinions and reliability by test-retest. About 120 questionnaires were distributed to interns. All data were analyzed with the SPSS package, using the chi-square test.

Results: The results of this study reveal that the most important objectives of medical education in the basic sciences period are the use of the English language in daily conversation and for consulting English texts and journals, and the ability to access and use up-to-date English scientific sources; reasonable curiosity, active learning, research skills, communication and consultation were also mentioned as priorities. Discussion and conclusion: Teaching capabilities should focus on the process, and students must be supported in order to enable them to formulate their own learning objectives.

 

Determining stressors in the first clinical experience
Keywords: stressors, clinical experience
Authors: Asemanrafat, N.
Institution: university

Summary: Introduction: In today's world the pace of change and the breadth of human experience are increasing compared with the past, and several stressors affect human life. We can neither eliminate stress nor avoid stressors. In research published in 1988, Pagana indicated that the known sources of stress recognized in the first clinical experience include fear of making mistakes and fear of not being accepted by the trainer.

Materials: This is a descriptive study that examined 59 nursing students who were undergoing their first clinical training course.

Results: 79.7% of the students were between 20 and 25 years old; 62.7% were not familiar with the hospital environment; 75.8% were not familiar with their delegated duties; 59.3% feared performing activities incorrectly and facing legal and conscientious consequences; and 35.6% feared the trainer's evaluation.

Discussion: Experience shows that we must pay attention to the first clinical experience as a source of stressors and use various approaches to help reduce these causes, so that motivation and interest will increase.

 

Comparing the motivation of nursing-midwifery students with that of students in other medical sciences fields for continuing education at the master's level
Keywords: motivation, master's degree
Authors: Aminalsadat, A.
Institution: University

Summary: This is a cross-sectional, analytical study comparing the motivation of nursing-midwifery students with that of students in other fields for continuing education at the master's level. The instrument used for data collection was a two-part questionnaire designed to meet the research goals.

Results: Analysis of variance and t-tests showed that psychological motivation was higher in the nursing-midwifery students than in the other group. In addition, analysis of variance (25) and a t-test (2.26), performed separately, showed that, overall, social motivation in the nursing-midwifery group was higher than in the other students. Analysis of variance (2.47) and a t-test (2.479) confirmed that familial motivation was higher in the students of the other medical sciences fields than in the nursing-midwifery students. The mean score on the economic motivation questions was also lower in the nursing-midwifery group (M=5.188) than in the other group (M=7.60); analysis of variance (5.185) and a t-test (8.185) confirmed this finding.

 

Investigating midwifery students' attainment of minimum learning requirements
Keywords: learning needs, midwifery students, minimum requirements
Authors: Ehsanpour, S.
Institution: Medical Sciences University

Summary: Introduction: The evaluation process is the most efficient factor for human progress in the present century. This study defined the minimum learning requirements for BS students of midwifery. Method: This is a descriptive study using the CIPP educational evaluation pattern. The population studied was composed of midwifery students (36 subjects over two semesters). Data were collected by questionnaires (developed by experts) through the Delphi method. Items were investigated through watching, listening, reading, experiencing, and ability and skills scales. Results: The findings showed that over 90% of the lessons had been taught in the course according to the outline approved by the committee. In clinical learning, the students did not have enough experience with uncommon items such as breech delivery, forceps delivery and women's cancers.

Discussion: The findings showed that the students had listened to most of the material in class. The findings also showed that students are slow in acquiring some special skills.

 

How do we assess clinical teaching? A thematic review of reliable and validated instruments
Keywords: evaluation, validity, faculty
Authors: Beckman, T.; Ghosh, A.; Cook, D.; Erwin, P.
Institution: Mayo Clinic

Summary: Background: Learner evaluations are widely used despite few existing standards for measuring learner assessments. Our objective was to review the published instruments for evaluating clinical teaching and to summarize themes for developing universally appealing tools.

Methods: Five electronic databases were searched using the terms validity, evaluation, faculty, and medical education. Over 330 articles were identified. Excluded were reviews, editorials, and qualitative studies. Twenty-one articles describing instruments for evaluating clinical faculty were found. Three investigators tabulated characteristics of the learning environments and validation methods. Salient themes amongst the evaluation studies were determined.

Results: Most studies combined outpatient and inpatient evaluations. Wide ranges in the numbers of subjects, evaluations, and items were observed. The most common statistical methods were factor analysis and determination of internal consistency with Cronbach's alpha. The least common methods were test-retest reliability and convergent validity between validated instruments. Seventeen domains of teaching were identified. The most frequent domains were interpersonal and clinical-teaching skills.
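
As a hedged illustration of the most common validation method named above (not taken from any of the reviewed studies), an exploratory factor analysis of simulated learner ratings with scikit-learn, where two latent teaching domains are built into the simulation:

# Sketch of exploratory factor analysis of learner ratings. The rating matrix
# is simulated; real studies would use observed scores on each evaluation item.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n_raters, n_items = 300, 12
# two simulated latent domains (e.g. interpersonal vs clinical-teaching skills)
latent = rng.normal(size=(n_raters, 2))
loadings = np.vstack([np.repeat([[1.0, 0.1]], 6, axis=0),
                      np.repeat([[0.1, 1.0]], 6, axis=0)])
ratings = latent @ loadings.T + rng.normal(scale=0.5, size=(n_raters, n_items))

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(ratings)
print(np.round(fa.components_, 2))   # item loadings on the two factors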

Conclusions: Characteristics of evaluations vary between educational settings and between learner levels, suggesting that future studies should utilize more narrowly defined populations. Establishing temporal stability and convergent validity should be considered. Current data support the validation of instruments comprised solely of interpersonal and clinical-teaching domains.

 

Evaluation of senior medical students' opinions about surgical education at the Medical University of Esfahan
Keywords: surgery, education
Authors: Hosseinpour, M.(MD). Behdad, A.(MD)
Institution: Kashani Hospital

Summary: Background: Assessment is an integral part of the programs of the Ministry of Health and Medical Education. In this study we evaluated the surgical education program at the Medical University of Esfahan.

Methods: In this study, 123 medical students were evaluated with a standardized questionnaire; 13 variables were included in the study.

Results: Education in the operating room, emergency ward education, residents' attitude towards education and the final examination method were significant factors in the overall score (P<0.05).

Conclusion: Considering the efficacy of education in the operating room, it is recommended to strengthen this kind of education as part of quality improvement.

 

Developing and Validating an Objective Structured Clinical Examination Station to Assess Evidence-Based Medicine Skills
Keywords: Evidence-based medicine
Authors: Gruppen, LD; Frohna, JG; Mangrulkar, RS; Fliegel, JE
Institution: University of Michigan

Summary: Objectives: Skills in Evidence-Based Medicine (EBM) have been identified by numerous medical education organizations as required competencies for students and residents. Although some tools for assessing EBM knowledge exist, there are few tools that assess competence in EBM performance. We have developed a computer-based objective structured clinical exam (OSCE) station to assess the student EBM skills and to evaluate the effects of curricular changes.

Methods: The web-based case requires students to read a clinical scenario and then 1) ASK a specific clinical question using the Population/Intervention/Comparison/Outcome (PICO) framework, 2) generate appropriate terms for a SEARCH of the literature, and 3) SELECT and justify the most relevant of three provided abstracts to answer the clinical question. Scores are computed for each of the three sections and overall.

Results: Two cohorts of third-year medical students were compared. The 2002 cohort had a minimal EBM curriculum whereas the 2003 cohort had an expanded, longitudinal EBM curriculum. Our assessment documented statistically and pragmatically significant effects.

 

Item                         Class of 2002 (N=140)   Class of 2003 (N=157)   Effect Size
ASK                          22.7                    26.0 a                  0.59
SEARCH                       13.7                    15.3 a                  0.52
SELECT Abstract              22.3                    23.4                    0.10
Total Score                  58.7                    64.8 a                  0.46
% passing all three parts    29%                     53% a                   0.48

a p < 0.01
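
The abstract does not state which effect-size formulas were used. Two common choices are Cohen's d for score differences (which needs the group standard deviations, not shown in the table) and Cohen's h for the difference in the proportion passing; a small illustrative sketch:

# Illustrative sketch of standard effect-size formulas (not necessarily those
# used by the authors). The SD values passed to cohens_d are placeholders,
# since the table above does not report standard deviations.
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean2 - mean1) / pooled_sd

def cohens_h(p1, p2):
    return 2 * math.asin(math.sqrt(p2)) - 2 * math.asin(math.sqrt(p1))

print(round(cohens_d(22.7, 26.0, 5.0, 5.0, 140, 157), 2))  # placeholder SDs of 5.0
print(round(cohens_h(0.29, 0.53), 2))                      # ~0.49 for the pass rates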

Conclusions: Using this validated methodology, we were able to document a significant change in performance in two of three skills on the EBM station. We attribute this improvement to the changes made in our curriculum. This EBM assessment tool has also been used for first year residents and is being evaluated currently in a multi-institutional validation study.

 

Comparison of interns' attitudes related to social medicine
Keywords: Attitude, Social Medicine, Intern
Authors: Jalili, Z.
Institution: Assistant professor

Summary: Background: Interns' attitudes correspond closely with their observations and judgments. Attitude is in fact one of the factors influencing the development and modification of medical education.

Objective: The present study was carried out in order to compare the attitudes of interns before and after a training course in social medicine.

Method: This quasi-experimental study was carried out via convenience sampling of 100 subjects in 2002-2003. The data were gathered via a questionnaire with internal consistency coefficients of 0.86 and 0.89 before and after the study, respectively. Interns filled in pre-test (before taking the courses) and post-test (after taking the courses) questionnaires, which were compared and analyzed with parametric and non-parametric tests.

Findings: There was a significant difference between the mean attitude scores on the pre- and post-tests (p<0.05). In order to compare the ranking of each attitude statement between the two stages, sign tests were carried out; all 27 statements showed a significant difference (p<0.05). No significant differences by sex were observed in the pre- and post-training scores. Conclusion: According to the results of the study, the researchers found that the social medicine training course had a considerable effect on interns' attitudes and could cause changes in their attitudes towards social medicine objectives.

 

Tools to assess communication skills
Keywords: Communication skills, Tools, Assessment
Authors: Clèries, X.; Kronfly, E.; Barneda, N.; Ros, E.; Martínez-Carretero, J.M.
Institution: Institute of Health Studies

Summary: Three experts in communication analyzed 120 videotaped interviews corresponding to one case of an OSCE (ACOE) for medical students, carried out during 2003. The aim was to validate a new tool for the assessment of communication skills, constructed from a questionnaire in use since 1997. The results obtained, in comparison with the previous questionnaire, were:

 

                                                      Previous   New
Internal reliability (Cronbach's alpha)               0.89       0.94
Inter-rater reliability (intraclass correlation
coefficient) between experts for each item, 95% CI    0.59       0.71
The expert knowledge of the examiners guarantees greater reliability with regard to consistency, validity and discrimination. However, the point of view of the standardized patients must be incorporated into the process of formative assessment of undergraduate students.
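
A minimal sketch of how the inter-rater figure above can be computed, here the intraclass correlation across three expert raters using the pingouin package and invented ratings (not the 2003 ACOE data); internal reliability can be computed analogously with Cronbach's alpha across items:

# Intraclass correlation across three raters on 120 interviews (simulated data).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(4)
n_interviews = 120
base = rng.normal(50, 10, n_interviews)                    # true interview quality
df = pd.DataFrame({
    "interview": np.tile(np.arange(n_interviews), 3),
    "expert": np.repeat(["A", "B", "C"], n_interviews),
    "score": np.concatenate([base + rng.normal(0, 5, n_interviews) for _ in range(3)]),
})

icc = pg.intraclass_corr(data=df, targets="interview", raters="expert", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])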

 

Lengthy permanence of students in the Medicine Course of studies
Keywords: Lengthy permanence
Authors: Breglia R., Catarivas V Álvarez, S., Cabalier M.E.D de. Corigliani, S.
Institution: Facultad de Medicina, Universidad Nacional de Córdoba, Argentina.

Summary: The study identifies students who started the course of studies more than 15 years ago and who, after quite a long time, have still not passed their subjects, and describes the causes of such permanence in the institution, including the similarities and differences that characterize the population under study. It is a study based on the analysis and systematization of documentation about these chronic medicine students and on interviews held with them to obtain further information on this particular problem. We try to determine the extent of this lengthy permanence, the number of students in this condition, and the factors involved in this delay. Our aim is not to explain this phenomenon but to describe how it has occurred from 1996 to date, considering the courses of study corresponding to 1968, 1974, 1985 and 1993. This work was carried out in an old-fashioned manner since, owing to the age of the data, they were systematized without the help of technological resources. It was observed that, of the 147 students in this condition, 32% are between 65 and 69 years old; 55.1% are male; 19.4% are married; 61% belong to the 1974 course of studies; work-related reasons account for 20.4% of the cases; and, today, 55.1% are still attending the course of studies. We describe a population that registered under circumstances completely different from today's, that belonged to deteriorated socio-cultural environments with very different biographies, that did not adapt to the demands of regular study, and that was not identified and supported by the institution in charge of its education.

 

Designing and Implementing an Institutional Assessment Plan
Keywords: university assessment plan, outcomes, curricular mapping, culture of assessment
Authors: Hvidsten, L.; Threinen, N.
Institution: Northwestern Health Sciences University

Summary: This presentation reviews the development and implementation of a university-wide assessment plan. The challenges of culture, resources, and deadlines are addressed as the audience is taken through the process of establishing institution-wide outcomes, curricular mapping, assessment, and documentation. This session focuses on the first year of developing and implementing an assessment program: how to get started, how long it will take, how to do it, who does it, and how to keep it going. The learning objectives for this session are: to describe a program of university assessment, to understand the development and ongoing nature of an assessment program, and to identify personnel resources and training.

 

Delivering GP appraisal in the UK-views of appraisers
Keywords: GP appraisal performance review
Authors: Jelley, D.
Institution: University of Newcastle

Summary: Background: All general practitioners in the UK are now required to undergo an annual appraisal.1 Most of these are being carried out by trained GP peers external to the GP's own practice. However, in the North East region of England, both internal2 (being appraised by an appraiser from the same practice) and external models of appraisal have emerged.

Study Aim: To define the perceived advantages and disadvantages of each model.

Methodology: Fifteen GP appraisers were selected randomly from the study population of trained GP appraisers who, where possible, had experience of one or both types of appraisal model. Data were collected through face-to-face, tape-recorded interviews.

Results: Where practice dynamics were robust, internal appraisal was rewarding and contributed to practice development plans. Some appraisers in both models were concerned about "knowing too much" about their appraisee, at times feeling uncomfortable about the failure to discuss issues they felt were important but which were not raised by the appraisee. There was concern about the link to revalidation and about the lack of feedback on their own performance as appraisers. Critical success factors for GP appraisal are emerging and will be discussed more fully in the presentation.

References:

1. Department of Health. Annual appraisal for general practitioners. Available at: www.doh.uk/gpappraisal, 2002.

2. Jelley D, van Zwanenberg T. Practice-based peer appraisal in general practice: an idea whose time has come? Education for Primary Care 2003; 14: 329-337.

 

Surgical-pathological correlation in acute appendicitis: experience matters
Keywords: appendicitis, surgical-pathologic correlation
Authors: Lim, J.; Shum, L.
Institution: Changi General Hospital, Singapore

Summary: Introduction: Acute appendicitis is the most common surgical emergency. Intra-operative recognition of pathology is crucial since the need for examination of pelvic organs depends on the surgeon's assessment of whether the appendix is pathological. Only when the appendix is deemed normal do surgeons proceed to inspect the rest of the pelvic organs to find an alternative etiology for symptoms.

Objective: To determine the discordance rate between surgeon intra-operative assessment of the appendix and pathology findings. Primary endpoint: Surgical-pathological correlation. Secondary endpoint: Correlation between seniority of surgeon and accuracy of surgical assessment.

Method: Retrospective review of 570 consecutive appendicectomies, looking specifically at the surgeon's intra-operative impression of the appendix as normal or pathological and comparing this with the pathologist's report (used as the gold standard).

Results: 568 reports were available for analysis: 281 males and 287 females, age range 11-90 years, median age 33 years. On histology, 130 appendixes were normal and 438 were abnormal (431 primary appendiceal inflammatory processes, 5 malignancies, 1 helminthic infection and 1 perforated diverticulitis).

Overall 13.7% discordance rate.

                <6 months experience    >6 months experience
Discordance     35/191                  43/377
Percentage      18.3%                   11.4%
P = 0.028 (two-tailed)
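As an illustrative check of the figures above, the snippet below (Python; a minimal sketch assuming a two-sided Fisher's exact test, since the abstract does not state which test was used) reproduces the two percentages and yields a p-value of the same magnitude as the reported 0.028:

from scipy.stats import fisher_exact

# 2x2 table from the abstract: rows = surgeon experience (<6 months, >6 months),
# columns = (discordant, concordant) surgical-pathological assessments.
table = [[35, 191 - 35],
         [43, 377 - 43]]

# Discordance percentages reported in the abstract (18.3% and 11.4%).
for discordant, total in [(35, 191), (43, 377)]:
    print(f"{100 * discordant / total:.1f}%")

# Two-sided test of the difference between the two rates.
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(round(p_value, 3))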

Conclusion: There is a significant discordance rate amongst junior surgeons (less than 6 months of general surgical operative experience). In the interests of patient safety, greater senior supervision intra-operatively or formal training in the identification of intra-operative pathology should be initiated. Alternatively, junior surgeons should routinely inspect the pelvic organs for pathology even if the appendix is deemed abnormal.

 

A comparative study about results of clinical skills assessment
Keywords: Clinical skills, assessment, nurse
Authors: Molins Mesalles, A.; Solà, M.; Pulpón, A.; Juncosa, S.; Marinez Carretero, JM
Institution: Institute of Health Studies

Summary: Since 1995 the Institute of Health Studies has assessed the clinical skills of Catalan nurses. The clinical situations represent normal, daily practice in hospital and primary health care settings. The competence components analysed were: team work, history taking, identification of patient problems, planning therapeutic strategies, clinical intervention, preventive activities, communication skills, teaching ability, ethical components, research and clinical knowledge. The participants come from different nursing university schools of Catalonia. In the last three years, 583 students were assessed in the last month before obtaining their nursing degree. In recent years the information obtained was no longer part of a pilot test but an evaluation with acceptable reliability. The results of the three compared groups are similar: in 2001 the average competence score was 60.00, in 2002 it was 59.75 and in 2003 it was 58.7 (the December 2003 results were not yet available when this abstract was prepared). The average for the multistation phase was similar, with 62.08 in 2002 and 60.83 in 2003. For the written phase the averages are likewise similar, in chronological order 51.08 and 50.87. Throughout the three years the two components with the lowest results were the ethico-legal components and research; the components with the highest results were team work, communication and teaching. The current purpose of the project is to decentralise it and to duplicate the circuit.

 

Faculty Evaluation: true or false
Keywords: Faculty, Evaluation
Authors: Abdolreza Jahanmardi, Mortez Haghirizadeh Roodan, Hayat Mombeini, Roya Jahanmardi
Institution: Ahwaz Medical Sciences University

Aim of presentation: Student evaluation of teaching is one of the major concerns in higher education. Over the past 30 years hundreds of papers have been published on various grounds, ranging from valid and reliable to useful and useless; such papers cannot be easily summarised. The purpose of the present work was twofold: first, to outline the opinions of two groups, advocates and opponents, about the validity and reliability of SET*; second, to present the conceptual fallacies of the SET process.

Summary of work: This study was a literature review of SET. From one thousand papers, one hundred published between 1995 and 2002 were chosen randomly. The information was collected and analysed comparatively.

Summary of results: Findings showed that SET advocates believe that students have metacognition and therefore make valid judgements in SET, whereas opponents state that students' judgements are subjective and therefore not valid. Advocates say that SET is reliable because of the correlation between the SET of current students and that of alumni, and because the similarity of SET results for one teacher in the same course across years is a further indication of SET consistency. Opponents, on the other hand, say that SET reliability is affected by educational contexts, student characteristics, teacher characteristics and course characteristics. The conceptual fallacies of SET are: (a) that students are the only reliable information source, (b) that there exists a unique and immutable metric termed "teaching effectiveness", and (c) that opinion is a fact.

Conclusion / Take-home messages: The findings indicated that SET is not reliable as a sole, documented source of teacher evaluation, so other approaches to evaluation must be applied as complements. These approaches should aggregate measures of teaching performance reflecting items within the teacher's control; meanwhile, the conceptual fallacies of SET cannot be remedied.

*SET: Student Evaluation of Teaching

 

Can Clerkship Learning Be Horizontally Integrated?
Keywords: Integration
Authors: Hider, S.L.; Hadfield, J.; Powley, D.; Brown, S.; Dornan, T.
Institution: Hope Hospital, Salford, UK

Summary: Background: Educationalists and regulatory bodies advocate integration (1) but evidence of its success in clerkships is lacking (2). Aim: Evaluate students' experiences of integration. Context: Clerkship phase of a fully horizontally integrated, problem-based curriculum. Methods: Anonymised quantitative and qualitative self-report evaluation of a module that integrates musculoskeletal, neurological and mental health learning around the theme of "Mind and Movement". Students' Likert ratings of how integration affected their learning and free-text comments on its principle and practice were analysed qualitatively by two independent observers, who arrived at an agreed interpretation. Results: 50 respondents made 52 comments broadly supporting the principle of integration and suggesting it is illogical to teach overlapping specialties separately. Some found the "shared theory" of different specialties helped them learn physical examination, whilst others experienced repetition and confusion. Students were negative about the practice of integration, citing "time pressure" as a major problem. Their narratives referred to the constituent disciplines of the module more than to its integrated set of learning objectives. Much teaching was along traditional "hospital specialty" lines, meaning it was left to students to find common ground between disciplines. However, specialty teaching was highly valued. Conclusion: The disciplinary orientation of teachers within a supposedly integrated module was partly responsible for students' sense of repetition, confusion and time pressure, but was also responsible for their best teaching. Future research should explore whether the tension between curriculum integration and specialty-based practice is irreconcilable.

 

Evaluation of a new program in International Health and Medicine
Keywords: International health, medical education, evaluation
Authors: Jotkowitz, A., Gaaserud, A., Heath, M., Bonawitz A., Gidron, Y., Margolis, C., Henkin, Y.
Institution: Ben-Gurion University of the Negev

Summary: Introduction: There is a need to train physicians in aspects of International Health and Medicine (IHM). In response to this necessity, Ben-Gurion University, in collaboration with Columbia University, inaugurated a medical school with the purpose of training physicians in IHM. A curriculum in IHM was developed and clinical training in a developing country was required. In order to evaluate the program, the Beersheba Survey of Attitudes and Knowledge in IHM was used.

Methods: The survey, consisting of questions relating to attitudes in IHM and to knowledge and clinical cases in IHM, was given to first year students before and after their introductory course in IHM and to fourth year students before and after their IHM clinical clerkship. Analyses of variance were conducted, followed by planned contrasts.

Results: The first year students significantly increased their knowledge and clinical knowledge in IHM but there was no change in attitudes toward IHM. The fourth year students had no significant change in attitude, knowledge or clinical knowledge after the clerkship.

Discussion: The students in the program had uniformly positive attitudes toward IHM, which did not decrease during the year. The pre-clinical curriculum was successful in increasing the students' knowledge, but the clinical experience had little measurable impact, although it was uniformly praised by the students. The lack of impact might have been due to the fact that the students had received prior IHM clinical experience. Further research is needed to document the effect of the program on the clinical practice of the graduates.

 

Primary mental health care and measuring physician ability to identify pediatric mental health issues
Keywords: "Primary Shared Care" "Pediatric Mental Health" "Measuring Physician Needs and Skill"
Authors: Cawthorpe, D.
Institution: University of Calgary

Summary: Objective: To assess community physician needs and ability to identify mental health problems in children under 6 years of age.

Methods:

1) Primary Practitioner Needs Assessment: Two surveys were conducted, (Phase I - national; Phase II - regional). The purpose of the surveys was to identify the learning needs among primary care physicians and recruit interested local physicians into the consultation service.

2) Physician ability to identify pediatric mental health issues: Physicians completing Phase II were invited to participate in Phase III. Recruited physicians who had not participated previously in the consultation service were required to identify pediatric mental health problems in, at minimum, the next 20 pediatric clients (age 0-6 years) who visited their practices. Their rates of identifying pediatric mental health problems were compared to the problem identification rates of physicians who had previously participated in the consultation service.

Results: Eighty-seven per cent of the physicians from the national survey and 67% from the regional survey reported that they did not have enough knowledge and support to detect and manage mental health problems in young children. Compared with physicians who had not been exposed to mental health education through time spent with the collaborative care team, exposed physicians were significantly more likely to identify a mental health concern or risk for an infant or young child less than six years of age (OR 2.09; 95% Confidence Interval [1.40, 3.14]; p < 0.0001).
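For context, an odds ratio and 95% confidence interval of the kind quoted above is conventionally obtained from a 2x2 exposure-by-outcome table with a Wald interval on the log scale; the sketch below (Python) uses purely hypothetical counts for illustration, as the abstract does not report the underlying table.

import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    # a, b: exposed physicians who did / did not identify a mental health concern
    # c, d: unexposed physicians who did / did not identify a concern
    or_value = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_value) - z * se_log_or)
    upper = math.exp(math.log(or_value) + z * se_log_or)
    return or_value, lower, upper

# Hypothetical counts for illustration only; not the study's data.
print(odds_ratio_wald_ci(30, 70, 20, 80))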

Conclusions: Physicians who were exposed to primary mental health care support are much better able to identify clients with mental health problems.

 

Core Skills in Women's Health - Outcome Evaluation
Keywords: pelvic examination, medical students, role-play, teaching associates
Authors: Carr, S.
Institution: University of Western Australia

Summary: Purpose: To evaluate the outcomes of introducing an educational program that teaches medical students how to take relevant and sensitive gynaecological histories and perform pelvic examinations through role-play with well women from the general community.

Study Design: Medical students and the women recorded their perceptions of the program over a two-year period. The outcomes of the program were evaluated by comparing medical student perceptions of confidence, competence and anxiety; the mean number of pelvic examinations performed during their course both before and after the introduction of the program and results of students' continuous and summative assessment.

Results: In the year prior to the introduction of the program, students performed a mean of 2.6 (95% CI 2.1, 3.0) pelvic examinations, compared with 4.1 (95% CI 3.8, 4.4) in the first year of implementation (2000) and 4.0 (95% CI 3.7, 4.3) in the second year of implementation (p<0.05). Students reported improved competence and reduced anxiety about performing a pelvic examination without supervision (p<0.05). All students passed their continuous assessment. Between 92% and 100% of students and women agreed that the program had clear learning objectives, was well organised, and was a useful and appropriate method of teaching which helped prepare them for the clinical setting.

Conclusions: This Pelvic Examination Educational Program has been positively evaluated by students and participant women and has resulted in a significant improvement in the amount of pelvic examination experience medical undergraduates obtain.

 

Assessment of academic staff evaluation program
Keywords: assessment, faculty member, program of evaluation
Authors: Rahimi, B.; Zarghami, N.
Institution: Oromiyeh University of Medical Science, Educational Development Center

Summary: Background: The teaching capability of academic staff has a significant relationship with their awareness of the educational process and the evaluation program. It is necessary that academic staff are aware of their own teaching capability and are able to continuously improve the quality of their practice.

Aim: To determine an evaluation program for academic staff. Summary of work: The subjects of this analytical, descriptive study were 70 of the 150 academic staff of Urmia University of Medical Sciences who responded to the questionnaire. Initially a questionnaire was prepared containing closed and open-ended questions about the evaluation process. To increase its reliability and validity, the questionnaire was first piloted. It was distributed and then collected by the researchers. Summary of results: The findings of this study revealed that 64% of the academic staff were male and 36% were female. 35.65% indicated no knowledge of an existing evaluation process during teaching, 44.33% indicated a lack of commitment to the implementation of an evaluation process, and 47.19% indicated a lack of commitment from the authorities and disadvantages of evaluation. 63.5% of academic staff agreed to be evaluated at the end of courses and 70% agreed to take part in educational workshops as a feedback system.

Conclusion: It is speculated that evaluation could improve teaching skills.

 

Explicit transferable skills teaching: does this affect student attitudes or performance in the first year at Medical School?
Keywords: Transferable skills, attitudes, performance
Authors: Whittle, S. R. & Murdoch-Eaton, D.G
Institution: University of Leeds

Summary: Recent changes in UK school curricula have introduced optional Key Skills units, leading to qualifications in Communication, IT and Use of Number. These are designed to teach transferable skills in the context of students' A level courses. Approximately 20% of the undergraduate intake at Leeds University Medical School possess some of these qualifications. This study was designed to detect differences between students with and without explicit skills qualifications. Students completed a questionnaire on arrival which asked how often they had practised a range of 31 transferable skills in the previous 2 years, and how confident they felt about their abilities in these. Studies are underway to determine any differences in performance between the two groups in medical course components with clear transferable skill objectives. Questionnaires were completed by 478 students (99 with Key Skills qualifications, 279 without). Students with Key Skills qualifications felt that they had received more opportunities to practise information handling (p=0.01) and IT skills (p=0.02). They also felt more confident in these skills (information handling p=0.04, IT skills p<0.001). Limited evidence suggests that they rated their technical/numeracy skills more highly (p=0.06). Students who have received specific skills teaching demonstrate improved confidence in some skills, and there appears to be a positive relationship between confidence and opportunities to practise these skills. Initial results from an essay-writing module, however, suggest that students with Key Skills qualifications do not perform better; later performance may yet show differences. Should medical schools encourage students to achieve these qualifications?

 

Are Case Reports useful in Assessment?
Keywords: case report, extended matching item, assessment, correlation, learning skills
Authors: Round, J.
Institution: St. George's Hospital Medical School

Summary: Background: Communication, examination, investigation, knowledge and management can be evaluated separately using conventional means, but not together in real patients. Using the literature in clinical practice, clinical writing and understanding the illness in the context of the patient are difficult to assess objectively in written or performance examinations. In a graduate-entry programme, case reports were used to examine these skills. Here that experience is reviewed.

Methods: 35 students submitted 92 case reports after attachments in paediatrics, medicine and surgery, following written guidelines and scored with a standardised mark scheme. They sat separate EMI exams in the same subjects. Both assessments were summative. Results of the corresponding papers were compared.

Results: Most students produced good reports, some exceptional, while some failed according to the scheme. Students scoring highest in the EMIs scored highest in the corresponding case report. However, those with combined scores (report + EMI) below the median showed an inverse relationship between EMI and case report marks (r2=0.27, p<0.01). Reports demonstrated understanding of information obtained from history, examination and investigations. Literature-based decision-making in the context of a patient could be seen in some reports.

Conclusions: Case reports provide evidence of many skills used together in patient management, so have a high degree of face validity. Reliability can be improved by standardising and refining a mark scheme. For lower scoring students, the inverse relationship between assessment scores may be because they have skills in fewer areas. Higher scoring students may have a wider range of skills.

 

The Use of Video to Evaluate Clinical Skills in Paediatrics
Keywords: video, OSCE, assessment, paediatrics
Authors: Round, J.
Institution: St. George's Hospital Medical School

Summary: Background: Objectively testing examination skills in paediatrics raises unique problems. Using general paediatric cases (as seen by GPs or non-specialists) is difficult as signs rapidly change and disappear. Children rapidly become tired, uninterested or non-compliant, so the usefulness of a particular station alters during the exam. Lastly, children require feeding and naps, which will not fit an exam schedule. Video has been used in the examination of psychiatric patients and of communication skills. To increase the reliability and face validity of paediatric examination stations, video stations of children with visible signs were developed. This abstract details their content and usefulness.

Methods: Video stations were used in a large (n=186) clinical OSCE for undergraduates, the results of which are given below. Stations consisted of 60-90 seconds of edited footage of children with acute problems (bronchiolitis, croup) or undergoing developmental assessment. Each station was accompanied by an instruction sheet and written questions. Performance was compared with overall performance. Student opinions were obtained at interview.

Results: Students scored a mean of 13.2/20 (SD 2.1) in the video station (OSCE mean 14.2, SD 2.8). Performance correlated with the overall OSCE result (r=0.32), with the mean correlation for each station being r=0.38. Students felt that the station was fair, although many confessed to a temporary shock at the new assessment method.

Discussion: This video station compares well with other forms of examination assessment in paediatrics. Its quality may be increased as students become more familiar with this type of station.

 

Modification of U.B. Dentistry clinical cycle students' beliefs in Pharmacology
Keywords: modification, beliefs, pharmacology
Authors: Sanchez, S.
Institution: Universitat de Barcelona

Summary: During the 2000-2001 academic year, the erroneous beliefs of University of Barcelona students of Dentistry, Medicine and Nutrition were evaluated. Since, in the current academic year, the group of Dentistry students is in the last year of their studies, we intend to analyse the modification of their beliefs in pharmacology after they have carried out their clinical learning and before the beginning of their professional practice. To this end, we will use the same questionnaire, which consists of 30 items grouped into four categories: a) therapeutic indications, b) mechanisms of action and adverse effects, c) patterns of treatment, prescription and consumption, and d) psychopharmaceuticals, and we will compare the results with those obtained when these students were in their second year of Dentistry studies. Based on the results obtained, we will be able to analyse the evolution of their beliefs in Pharmacology and to evaluate the impact of the teaching given in the three courses of the clinical cycle by professors not related to the pharmacology area.

 

QFD and continuing medical education
Keywords: QFD matrix, continuing medical education
Authors: Ruiz de Adana Perez, R.; Agrait Garcia, P.; Carrasco Gonzalez, I.; Duro Martinez, J.C.; Rodriguez Vallejo, J.M.; Millan Nuñez Cortes, J.
Institution: Agencia Lain Entralgo

Summary: Quality Function Deployment (QFD) is a way of listening to customers, understanding exactly what they expect and, using a deductive system, finding the best way to satisfy customer needs with the available resources. QFD is a process design methodology for guaranteeing that the customer's voice is heard during the planning, design and implementation of a product or service: listening to, understanding, acting on and translating what the customer tells us is the philosophical heart of QFD.

Objectives: To implement the planning matrix for continuing medical education in the Laín Entralgo Agency, based on the quality function deployment model, and to analyse the QFD matrix, identifying and prioritising improvement opportunities in the continuing medical education process.

Methods and persons: A working group composed of 6 experts in planning continuing medical education implemented the QFD matrix, identifying and analysing the following segments: customer requirements (What?); characteristics of the process activities (How?); the matrix relationship between the 'what' and the 'how'; competitive evaluation; objectives of the process activities (How much?); compliance evaluation of the process characteristics; and the technical and relative importance of every process activity.

Results: We present the QFD matrix of the continuing medical education process based on the students' requirements. The analysis of the QFD matrix identifies opportunities for improvement in the following activities of the continuing medical education process: identification of organisational needs, identification of professional expectations (needs), development of the continuing medical education plan, design of educational courses, selection of teachers, scheduling and course accreditation.

 

Outcome of quality assessment of a cardiology residency as a result of joint brainwork of graduates and their present medical chiefs
Keywords: quality, program evaluation post-graduate
Authors: Alves de Lima, A., Terecelan, A., Nau, G., Botto, F., Trivi, M., Thierer, J., Belardi J
Institution: Instituto Cardiovascular de Buenos Aires

Summary: Experiences acquired by residents during residency programs (RP) do not always assure success in the working field.

Objectives:

a. to find out what graduates perceive regarding their degree of training acquired after residency period

b. to correlate the opinion of both Graduates and their present medical chief (PMC)

c. to determine whether graduates and PMC perceive existence of re-adaptation period (RAP) after conclusion of RP.

Method: The study was carried out in a university hospital in Buenos Aires in 2003. All the G graduated between 1998 and 2001. Each G was asked to identify their PMC, defined as the doctor responsible for the G for more than 60% of their weekly working hours during the study period. Data were obtained through an 8-question survey. Qualitative and quantitative analyses (CA) were carried out; the Wilcoxon test was used for the CA.

Results: 15 G (100%) and 13 PMC were included. The G showed great satisfaction with the training received during the RP; in-patient care areas were especially identified. The PMC judged the G as highly competent, particularly in in-patient care areas, regarding counselling skills and overall clinical competence. Regarding the RAP, 13 G and 8 PMC considered that it exists and that it lasts 385 (±6) vs. 344 (±5) days (p=NS). Conclusion: Graduates expressed high satisfaction with their preparation, and medical chiefs with their performance. Most of the participants considered that there is a re-adaptation period after residency. The present data provide evidence of the effectiveness of a program aimed at preparing doctors for medical practice.

 

Peer Participant Observation of Teaching (PPOT)
Keywords: teaching quality, tutor feedback, peer review, peer observation, small group teaching
Authors: Dowie, A., Duffy, R., Dowell, J.
Institution: University of Dundee

Summary: Teaching quality is a perennial issue in higher education, and this applies as much to medical schools as to any other academic department. Excellence as an educational goal is the primary motivation for this, but a recent court action won by a UK student against his university due to inadequate teaching standards, and the potential for such litigation elsewhere, are additional factors. Approaches to giving tutor feedback range from peer review of teaching (PRT) to peer observation of teaching (POT). An alternative to these in the context of small group teaching is a tutor support programme where the peer is neither a detached observer nor an extra tutor, but is rather an active participant in the group for the occasion. Instead of auditing and debriefing on the teaching session, the peer participant observer facilitates shared reflection on the educational practice that took place. This is followed up by a letter documenting the discussion, with feedback on the process received from the tutor. Tutors report that peer participant observation of teaching (PPOT) is useful in supporting and developing their practice, as well as being a non-threatening approach. The agenda of tutor appraisal procedures and the development of andragogical skills are separate and not always compatible. In settings where the teaching is delivered in small groups, PPOT offers a means of enhancing quality in which the focus is placed exclusively on the educational event.

 

Survey on educational programmes for resident physicians
Keywords: residents, program, educational objectives
Authors: Tutosaus, J.; Martínez-Brocca MA, de la Higuera JM, Díaz-O J, Morales-Méndez S, Barroeta J.
Institution: Hospitales UU. V. Rocío

Summary: Objectives: 1. To ascertain the opinion of professionals about the requirements that Educational Programmes (EP) should meet. 2. To establish differences of opinion according to professional status.

Material and methods: Opinion poll (n = 91, participation rate 73.6%): 5 questions relevant to the evaluation process (Yes/No) and weighting of 24 parameters (from 0 to 10). Units: General and Digestive Surgery: 28 (42.4%), Endocrinology and Nutrition: 17 (25.8%), Nuclear Medicine: 12 (18.2%), Hospital Pharmacy: 9 (13.6%).

Results: Knowledge of the EP (88%). Necessary modification of the evaluation system (76%). Final evaluation methods for residents (47%). Final evaluation for residents and work opportunities (87%). Other comments: abilities and attitude, prevents objective comparison among residents, scarcely representative of the resident's training, subjectivity of the evaluating agent, poorly supported by the EP.

Conclusions: Updating the EP improves their quantitative evaluation. The current evaluation system of residents should be modified and we should consider the possibility of combining the opinion of the Teaching Commission with a single final test. Specialist physicians should be able to choose the evaluation of residents as an option included in their area of work. The selected evaluation parameters have been accepted and scored, though a slight quantitative difference is observed among them.

 

Validation of a Global Rating designed to assess communication skills
Keywords: communication skills, assessment, validation, global rating
Authors: Scheffer, S.
Institution: Charité Universitaetsmedizin Berlin

Summary: In the Reformed Medical Curriculum of the Charité-Universitaetsmedizin Berlin, training of communication skills runs continuously throughout five years of study. As an important part of this training, students practice with standardized patients who give feedback on the basis of a structured feedback guide, the Calgary Cambridge Observation Guide (CCOG; Kurtz, Silverman & Draper, 1998). Up to now, however, communication skills have not been assessed. The aim of this study was to validate a global rating scale in order to integrate the assessment of communication skills into students' summative exams. An instrument designed by Hodges and McIlroy (2003) was translated and slightly modified to fit our context. Three different groups of raters were trained to apply this global rating and assessed the communication skills of 120 second and third year students during an objective structured clinical examination (OSCE): (1) the OSCE physician examiners, (2) communication skills teachers as experts, and (3) the standardized patients. A fourth group of communication skills experts rated the interaction between student and SP with a short version of the CCOG. The correlation between the Global Rating and the CCOG checklist will be presented as an aspect of concurrent validity. The interrater reliability, as a further step in the validation process, will be discussed. Implications for the future use of the instrument as a tool to assess students' communication skills will be outlined.

 

Five Years of Accredited Continuing Medical Education at the Academy of Medical Sciences of Catalonia and the Balearics
Keywords: Continuing Medical Education
Authors: Reig, J.
Institution: Department of Continuing Education Acadèmia de Ciències Mèdiques de Catalunya i de Balears. Barcelona

Summary: The study presents the results of an analysis of the new activities accredited as continuous medical education in the period 1999-2003 at the Academy of Medical Sciences of Catalonia and the Balearics (ACMCB). Accreditation was carried out by means of an assessment questionnaire drawn up by the CCFMC. Each activity was assessed independently by each of the members of the ACMCB Accreditation Committee. In the period studied, 214 new Continuous Medical Education activities were accredited of which 102 were put on by scientific societies, 91 by territorial branches of the ACMCB and 21 were activities organised directly by the ACMCB's Education Department. These activities comprised a total of 4,267 hours (mean: 20.2±14.1; range = 4 -120h) and the number of credits obtained from the activities assessed varied from 1 to 24 (mode=2). The most common type of accredited activities were courses requiring physical attendance (63%) and meetings dealing with a single specific topic (24%). Virtual courses accounted for 0.9% of the accredited activities. The methodology most frequently employed was that of a theoretical exposition followed by discussion (60%). Combined theoretical and practical activities made up 21% of the total and practical workshops 5.1%. Evaluation was assessed using questionnaires handed out at the end of the activity in question (50%), while multiple-choice tests or resolving cases to do with the course topic were used in 20% of the total. Most of these activities were aimed at specialist physicians (46%) and healthcare personnel in general (30%). Activities targeting general practitioners represented 10%.

 

Assessing the Generic Skills of SpRs
Keywords: Assessment Centre, Generic Skills
Authors: McMillan, J.
Institution:
The Yorkshire Deanery

Summary: Introduction: The Yorkshire Deanery and others are increasingly concerned about the difficulties inherent in assessing the generic skills of doctors. There is a lack of rigorous assessment of competencies such as communication skills and leadership, and of documented evidence. What we did: External consultants were invited to assist in piloting an Assessment Centre. A pilot group was identified: 12 SpRs - 4 in year one of training, 4 in mid-training and 4 in their final year. Key competencies were taken from the GMC's Good Medical Practice and adapted to produce nine units of competence. Assessment exercises were designed which included an observed group discussion, an in-tray written exercise, a personal interview and a role-play scenario using trained simulators. Assessors were recruited - three Associate Deans and three non-medical assessors - and all received a whole day of training in assessment techniques. Although the candidates were randomly selected, we arrived at a good balance in terms of gender, specialty and large/small hospital, and had 100% attendance from the volunteer SpRs. Candidates were given feedback on their performance via telephone and a detailed written report.

Evaluation: After the event, both candidates and assessors were asked to formally evaluate the process. Candidates commented positively on the benefits both personally and professionally.

Next Steps: A similar pilot will be conducted in the Merseyside Deanery in April 04. Following this the applications of such Assessment Centres for doctors in training will be explored.

Ideas include: Development Centres for junior doctors. Final exit assessment and pre-entry qualification. Recruitment and selection.

 

Peer-Assessment and Tutor-Assessment in PBL Tutorials: Is there a relationship?
Keywords: PBL, Tutorial, Peer-Assessment, Tutor Assessment
Authors: Mona Al-Shamlan*, Raja Bandaranayake, Usha Nayar
Institution: College of Medicine, Arabian Gulf University

Summary: Purpose: To determine the reliability of peer- and tutor-assessment in PBL tutorials and determine if there is any correlation between peer- and tutor-assessment or between peer assessment and students' final grades.

Method: Likert-scale peer- and tutor-assessment forms were used. The forms were distributed at the end of the third year of a 6-year program. A total of 65 third-year medical students and 7 tutors responded. The assessment items were: attitude and behaviour, acquisition of knowledge, problem-solving skills, interaction skills, utilization of resources, and overall assessment. Open comments about areas of strength and weakness were requested from peers and tutors for each student. Finally, peers' and tutors' opinions about peer-assessment were studied.

Results: The mean tutor rating was significantly higher than the mean peer rating for all assessment items. The highest peer rating was for attitude and behaviour (4.40), and the lowest was for interaction skills (4.01). The highest mean tutor rating was for attitude and behaviour (4.71), and the lowest was for interaction skills (4.29). Cronbach's alpha was high for peer-assessment (0.94) and for tutor-assessment (0.88). Moderate to low correlation was found between peer- and tutor-assessment. Peer-assessment was considered to be a good idea by 80% of the students and 71.4% of the tutors.
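For reference, internal-consistency coefficients such as the two alphas reported above are computed from a respondents-by-items score matrix; the minimal sketch below (Python) applies the standard Cronbach's alpha formula to a small, purely illustrative matrix rather than to the study's data.

import numpy as np

def cronbach_alpha(scores):
    # scores: array of shape (n_respondents, n_items)
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of assessment items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Illustrative 5-point ratings (4 respondents x 6 items); not the study's data.
example = np.array([[4, 5, 4, 4, 5, 4],
                    [3, 4, 4, 3, 4, 4],
                    [5, 5, 5, 4, 5, 5],
                    [4, 4, 3, 4, 4, 4]])
print(round(cronbach_alpha(example), 2))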

Conclusion: Peer-assessment should be considered as a tool in curriculum planning for its effect on students' learning and improvement during medical education and their medical practice. Further research is recommended to test the effect of gender on peer-assessment.

 

Facilitating PPD using a learning portfolio: experience in a new UK medical school
Keywords: professionalism, reflection, portfolio, undergraduate medical education, assessment
Authors: Roberts, JH.
Institution: Phase 1 Medicine, University of Durham, Stockton Campus, University Boulevard, Thornaby, Stockton-on-Tees TS17 6BH

Summary: Purpose: To describe the process of using a reflective Learning Portfolio to assess second year medical students' personal and professional development (PPD) in a new UK medical school.

Methods: PPD at Durham covers ethics, communication skills, evidence-based medicine, self-care and clinical contexts of care. As part of their formative assessment, students were required to keep a learning portfolio for eighteen months exploring their development in these areas, supported by five prompts: initial motivation for, and early impressions of, medicine; learning needs and achievements; 'critical incidents'; and links between PPD and the wider curriculum. The portfolios were assessed by a PPD tutor and allocated a provisional mark according to the evidence of reflection throughout the portfolio. This mark was confirmed after a 30-minute interview between the assessing tutor and the student. The interview was an opportunity for tutors to give substantial, individual feedback to the student. Tutors were given training to guide them in the marking and the conduct of the interview.

Results: Tutor and student inexperience combined to produce anxiety about the assignment and some reluctance to seek help. One finding was that students largely used the portfolio as a cathartic exercise, which raised confidentiality issues.

Conclusion: There is a fine balance between encouraging students to determine the content of their own portfolio and the need for clear criteria for assessment purposes. Tutors' enthusiasm and preparation for the activity are also pivotal in securing its success. We have responded to student and tutor feedback by shortening the length of the assignment and providing more support for tutors and students.

 

The evaluation of a medical curriculum: using the methods of programme evaluation to align the planned with the practised curriculum
Keywords: curriculum evaluation, quality assurance, medical curriculum, programme evaluation
Authors: Wasserman, E.
Institution: University of Stellenbosch, Republic of South Africa

Summary: Background: The current focus on the quality assurance of higher education in general and medical education in particular creates a need for a practical but methodologically sound approach to curriculum evaluation. This presentation describes an approach to curriculum evaluation in medical education based on programme evaluation methods used in the social sciences.

Aims & objectives: The aim of the presentation is to explain how the evaluation of a curriculum can be undertaken on the basis of the methodology of social scientific programme evaluation. The curriculum of the medical programme offered at the Faculty of Health Science of the University of Stellenbosch since 1999 is used as a case study to illustrate this approach.

Methods: Clarificatory evaluation is used to assess the planning of a curriculum (the planned curriculum). A Logic Model is constructed as a product of this clarification evaluation.

Results: Aspects of the Logic Model produced by the clarification evaluation of the medical programme offered at the Faculty of Health Science of the University of Stellenbosch will be presented to illustrate this approach.

Discussion and conclusions: Curriculum evaluation is an important component of the process of quality assurance. Aligning the planned and the practised curriculum as an approach to the quality assurance of a curriculum can be applied to any of the four types of academic review described by Trow. The approach described here is consistent with the definition of quality as fitness for purpose.

 

A pilot study on world federation for medical education (WFME) standards on basic medical education in Iran
Keywords: WFME-basic medical education - pilot study
Authors: Malakan Rad, E.
Institution: Educational Development Center of Kashan University of Medical Sciences

Summary: The WFME standards were distributed worldwide in 2003, and Kashan University of Medical Sciences (KUMS) was accepted to participate in the pilot study. This study was performed to evaluate the degree of fulfilment of the WFME standards at KUMS. The responses to the questions in questionnaire B were statistically analysed in two ways. First, the responses to all questions were coded and the data were entered into SPSS version 9; the results are presented as frequency distributions. Secondly, in order to assess and compare our status on the different items (from 1 to 9), a scoring system was devised in which the maximum score was 15 and the minimum was -2; these data were entered into Excel and the results are presented as bar charts for comparison. Coverage of existing information was sufficient for 44.4% and 25% of items in the basic standards (BS) and quality improvement standards (QIS) respectively. Existing data were up to date in 86.1% (BS) and 72.2% (QIS). Collection of new information was done in 88.9% (BS) and 69.4% (QIS). Appraisal was undertaken in 72.2% (BS) and 55.6% (QIS). Fulfilment of standards was achieved in 22.4% (BS) and 2.8% (QIS). A major strength and a major weakness were present in 13.6% and 5.6% (BS) and 5.6% and 11.1% (QIS). Consideration of changes was initiated in 77.8% (BS) and 55.6% (QIS). Changes were possible within existing rules in 72.2% (BS) and 44.4% (QIS). The highest scores in the BS were obtained for recruitment policy, physical facilities and educational expertise. The lowest scores belonged to behavioural and social sciences and curriculum structure.

 

The Effect Of Testing Progress Within A Traditional Clinical Course
Keywords: progress-testing, assessment, anxiety
Authors: Badcock, L., Dennick, R.
Institution: Derbyshire Royal Infirmary, UK

Summary: Background: A progress-test style of assessment has recently been introduced for final-year clinical students at our medical school. The aims of this assessment are to be both formative and summative and to encourage self-directed study and continuous learning without detriment to study behaviour.

Aim: To determine if the new assessment system achieved these aims, by exploring anxiety, student perception and study behaviour.

Method: Student anxiety (using a validated inventory, the TAI), perceived test importance (10-point scale) and study behaviour were measured shortly after two within-course tests and the final summative test. An inductive analysis was performed on the free-text comments.

Results: The 50% sample did not differ in age, sex or final performance from the whole year group. The within-course tests engendered very little anxiety and were perceived as less important than the final test. Students did not study specifically for them nor learn by rote, unlike for the final test. Students were divided as to whether the tests made them study more continuously or more freely. Many perceived the tests simply as practice for the final test. The free-text comments supported and gave a deeper perspective on the quantitative findings.

Conclusion: Unlike the final test the within-course tests had no detrimental effect on study but the extent of any positive benefit was limited.

 

Assessment of educational program quality in Tehran university of medical sciences and health services, according to the referendum from the graduates
Keywords: Assessment, education, graduate
Authors: Farzianpour, F.
Institution: School of public health Tehran university of medical sciences and Educational development center

Summary: Introduction: Educational program quality assessment at university level aims to determine: 1) the degree and extent to which the university's foreseen objectives are fulfilled and 2) the strengths and weaknesses of these assigned objectives. Educational program quality assessment is one of the most significant duties of universities of medical sciences. The occupational capacity, ability and efficiency of medical graduates in offering the best health and treatment services, and in providing individual and social health, depend largely on the provision and fulfilment of the above-mentioned objectives. If the educational programs are not well designed and well performed, there will be harmful cultural, social and economic effects imposed on the public, on the graduates and on the university's credibility and management. The general objective of this study is to improve educational program quality and to promote education at university level.

Special objectives are:

1- To determine the total average scores of the graduates.

2- Distribution of age and gender.

3- Satisfaction.

4- Strength and weakness points.

5- Finally Educational problems of the graduates.

The survey method was analytic-descriptive and the community under study comprised about 178 graduates of the medical faculty. All data were analysed using SPSS (versions 9 and 10). Survey findings show that 61.2% of the graduates are male and 38.8% are female. The graduates' average score is 15.75, with a standard deviation of 1.23 and a range of 12.66 to 18.55. About 78.7% of the graduates expressed satisfaction with their faculty. The most important strength and weakness of the survey community were 52.2% in the training period and 78.7% in the physio-pathological period, respectively. It is concluded that the average satisfaction of the graduates at university level is 66.11 and that there has been significant improvement in educational program quality at the medical university.

 

Relationship between student self-evaluations and evaluations by examiners in OSCE
Keywords: OSCE, student self-evaluation, evaluation by examiners
Authors: Ueno, T.
Institution: Kurume University

Summary: In 2001, the Japanese Ministry of Education, Science and Sport decided to introduce competency tests including Objective Structured Clinical Examinations (OSCE) and Computer-based Testing (CBT) for use nationwide in the preclinical evaluation of medical students. OSCEs were administered to 4th year medical students at the Kurume University School of Medicine this year using 7 stations, and consisted of patient interviews, physical examinations of the head and neck, chest and heart sounds, abdomen, neurological system, basic surgical treatment and emergency treatment. The students were evaluated by two examiners and also carried out self-evaluations for each of these stations. We statistically analyzed the relationship between the student self-evaluations and the evaluations by the examiners. Results showed significantly positive correlations (patient interview; p<0.0001, physical examinations of the head and neck; p<0.00001, physical examinations of chest and heart sounds; p<0.000001, physical examinations of the abdomen; p<0.01, physical examinations of the neurological system; p<0.00001, basic surgical treatment; p<0.0001 and emergency treatment; p<0.00001) between student self-evaluations and the evaluations made by the examiners at each station. These results suggest that the students at the Kurume University School of Medicine were able to evaluate their OSCE performance calmly and objectively, and that the OSCE examiners were evaluating student skills accurately.

 

OSCE: Challenging Korea's Medical Evaluation System
Keywords: OSCE, korea
Authors: Lee, Y.; Ahn, DS; Kim, MK
Institution: College of Medicine, Korea University

Summary: Since its introduction in the mid-1970s, the Objective Structured Clinical Examination (OSCE) has been applied broadly in North America and Western Europe. Consequently, most of the existing literature on this subject is based on experiences gained in these same cultures. Korea, a nation rapidly modernizing but not yet 'developed', has met with challenges in applying the OSCE as is. These problems stem mainly from differences in the educational environment and infrastructure of the medical schools. To reduce trial and error and cost inefficiencies, and to help accelerate the adaptation process, sharing the experience of applying the OSCE in non-Western medical education settings that have yet to implement a clinical skills test, or are in the beginning stages of implementation, could prove highly informative and beneficial. Since the OSCE was introduced in Korea in 1994, the number of medical schools incorporating the OSCE and SPs into their curricula has increased continuously. In this article, the authors describe how and to what extent the OSCE has been applied in the Korean medical education system. Data were gathered during April and May 2003 through a survey sent to all of Korea's 41 medical colleges. 37 responded, with 22 claiming to be administering the OSCE, 9 actively planning to implement the OSCE, and 6 having created management teams (e.g. OSCE Team, OSCE Research Team, Council for Clinical Skills Education, etc.) to design a concrete plan for the OSCE's implementation.

 

Assessing clinical analysts' competence
Keywords: clinical analysis, competence
Authors: Blay, C.; Ros, E.; Julià, X.; Juncosa, S.; Martinez-Carretero, J.M
Institution: Institute of health studies

Summary: Background: Since 1994, the Institute of Health Studies, jointly with the Catalan medical schools and professional associations, has conducted several projects on Clinical Skills Assessment (CSA). Summary of work: In 2002, with the collaboration of the Catalan Association of Clinical Analysts, a CSA prototype was designed to assess the competence of clinical analysis professionals. The pilot test was held in October 2003. The prototype consisted of a combination of a written examination and a brief performance-based multistation examination. Candidates were exposed to twenty-five cases. Summary of results:

- The examination scored highly on internal consistency with a Cronbach's alpha = 0.84

- Candidates' opinion showed high scores regarding face validity

- Content validity is supported by the methodology used by the Exam Committee, by candidates' and observers' responses to questionnaires, and by the comparison between the examination contents and reference curricula.

- Construct validity: there is a good correlation between the results and the candidates' level of training and expertise.

- Predictive validity was also supported by the correlation between the prototype results and a peer-review study.

Conclusions: The results are encouraging and, although this has to be confirmed in future editions, we think this assessment prototype has good potential as a tool to certify the competence of professionals in Clinical Analysis.

 

The relationship between group productivity, tutor performance and effectiveness of PBL
Keywords: Problem-based learning, tutoring
Authors: Dolmans, D., Riksen, D. & Wolfhagen, I.
Institution: University of Maastricht, Dept. of Educational Development & Research, PO Box 616, 6200 MD Maastricht

Summary: Tutor performance and tutorial group productivity correlate highly with each other. Nevertheless, for some tutorial groups a discrepancy is found between the two variables. The hypothesis tested is whether a high-performing tutor can compensate for a low-productivity group and whether a high-productivity tutorial group can compensate for a relatively low-performing tutor. Students rated tutor performance, tutorial group productivity and the instructiveness of the PBL unit (1-10). In total 287 tutors were involved and were categorized as having a relatively low, average or high score on tutor performance; the same was done for the group productivity score. For each combination, the average instructiveness score was computed. The results demonstrated that the average instructiveness score was higher when the productivity score was higher, and also higher when the tutor score was higher. However, the average instructiveness score did not differ significantly under different levels of tutor performance, whereas it did differ significantly under different levels of group productivity. It is concluded that a highly productive group can to a considerable extent compensate for a low-performing tutor, whereas a high-performing tutor can only partly compensate for a relatively low-productivity tutorial group. The findings of this study are in line with earlier studies demonstrating that tutorial group productivity and tutor functioning interact with each other in a complex manner. The implication is that faculty should put more effort into improving group productivity, e.g. by evaluating tutorial group functioning on a regular basis.

 

Improving Clinical Competence in Health Issues in a Third Year Pediatric Clerkship
Keywords: Health Issues; Pediatric Clerkship; OSCE
Authors: Bonet, N.; Márquez, M.
Institution: University of Puerto Rico, School of Medicine

Summary: Background: The Objective Structured Clinical Examination (OSCE) is used in medical schools to evaluate clinical skills. At the Department of Pediatrics, School of Medicine of the University of Puerto Rico, students' development of clinical competence on health issues, e.g., growth/development, health maintenance, disease prevention and patient education, has been a great concern, one supported by students' performance on USMLE Steps 1 and 2 and the NBME pediatric subject test.

Methods: To improve students' clinical skills, the faculty implemented the following interventions:

• Students must follow Guidelines for Health Supervision III (American Academy of Pediatrics) for interventions on ambulatory setting and assigned case presentations

• Students are required to present a lecture on the topic of growth and development

• An immunization lecture was added to didactic activities

• Faculty were asked to strengthen their teaching of health promotion, maintenance issues and disease prevention during clerkship rotations. To assess students' skills after the above interventions, in 2002 the faculty restructured the pediatric OSCE to include one station devoted exclusively to health maintenance and disease prevention.

Results: Outcome data for 2002 and 2003 indicate an improvement in students' performance on USMLE Step 2 on the topics of preventive medicine and health maintenance. OSCE mean scores show superior student performance on the health maintenance station, with an overall mean of 85.68%.

Conclusions: There is evidence that selected interventions and OSCE stations to teach and evaluate clinical skills are effective.

 

What is being assessed? A case study from Sri Lanka
Keywords: Assessment, Bloom's taxonomy, Objectives, evaluation
Authors: Karunathilake, I.; McAleer, S.; Davis, M.H.
Institution: University of Dundee

Summary: In 1995, the Faculty of Medicine, Colombo, Sri Lanka, changed to a more student-centred, integrated and system-based curriculum. The faculty objectives emphasised not only factual recall of knowledge but also higher-order thinking, skills, attitudes and professionalism. Regular evaluation is the key to ensuring that the new curriculum objectives are being achieved. An evaluation of the assessment used in the Haematology & Immunology module, one of the body-system modules in the curriculum, was undertaken. The evaluation involved a content analysis of the structured essay questions (SEQ), the multiple true/false questions (T/F) and the Objective Structured Practical Examination (OSPE). The analysis involved two approaches, determining

1. the extent to which the assessment addressed course objectives

2. the cognitive level of the individual questions, evaluated using Bloom's taxonomy. The assessment did not reflect all the course objectives. Basic science and clinical knowledge and disease prevention were assessed, but team work, professionalism and attitudes, statutory duties and research were not. The majority of questions tested factual recall of knowledge or understanding: 93% of the T/F, 56% of the SEQ and 70% of the OSPE questions. This study shows the usefulness of a content analysis based on taxonomic principles. The findings will be used by the faculty to improve assessment.

 

Do curriculum changes to a Paediatric Post-Graduate Program (PPGP) provide appropriate learning experiences?
Keywords: curriculum evaluation or pediatric residency
Authors: H. Amin, R.B. Scott, P. Veale, J-F. Lemay
Institution: Department of Paediatrics. University of Calgary

Summary: Objectives: To determine if: 1) curriculum changes introduced in 2000-2001 represent an improvement to the PPGP; 2) one of these changes (subspecialty in-patient rotations) provided appropriate learning experiences that met the paediatric objectives of the Royal College of Physicians and Surgeons of Canada (RCPSC).

Methods: 1) A written survey of subspecialists (SP) and community general paediatricians (CGP) in Calgary was conducted in summer 2002 to determine if curriculum changes introduced represented an improvement to the PPGP. 2) During a mandatory 4-week subspecialty rotation, paediatric residents (PR) kept a logbook of their learning activities (preceptor teaching, number of topics taught) and clinical work (number of patients seen). Problems encountered were linked to RCPSC objectives.

Results: 1) 78% (51/65) of SPs and CGPs returned the survey. 72% of SPs and 63% of CGPs expressed satisfaction with the curriculum changes. However, training in ambulatory paediatrics was felt to be insufficient, with an over-emphasis on critical care and hospital-based paediatrics. 2) 64% (9/14) of PRs completed logbooks (mean 18.8 days). Preceptors provided teaching for 42.2 ± 36 minutes/day. 54% of preceptor teaching was directly related to patients seen. 16.3 ± 12.5 topics were taught. An average of 8.6 patients were seen per day. 90.6% of directed teaching activities occurred during the daytime. All teaching topics and most clinical problems encountered were included in the RCPSC objectives. All residents expressed satisfaction with the numbers of patients seen and with preceptor teaching.

Conclusions: Curricular changes improved our PPGP. Additional training was recommended in ambulatory/outpatient care. A 4-week pediatric subspecialty in-patient rotation can provide appropriate learning experiences.

 

How do standardized patients assess students' communication skills using a Global Rating?
Keywords: Standardized patients, Communication skills, OSCE, Assessment, Global rating
Authors: A. Froehmel, I. Muehlinghaus, S. Scheffer, H. Ortwein, W. Georg, W. Burger
Institution: Reformstudiengang Medizin, Charité - Universitaetsmedizin Berlin, Germany

Summary: The Reformed Medical Curriculum started at Charité Medical School in 1999. It is the first curriculum in Germany that provides communication skills training and a standardized patient (SP) program throughout the whole curriculum. SPs are employed in the fields of history taking, interviewing and counseling skills, as well as in Objective Structured Clinical Examinations (OSCE). Standardized patients are trained to give constructive feedback after the consultation using the Calgary-Cambridge Observation Guide. The aim of this study is to validate a global rating scale to assess students' communication skills during an OSCE at the end of the semester. We used a translated global rating scale developed by Hodges & McIlroy (2003). Three groups of raters were trained to assess students' communication skills:

(1) the OSCE examiners, (2) the standardized patients, and (3) the communication skills teachers, who served as experts and benchmark. For validation purposes, SP ratings are compared with OSCE examiner and expert ratings. The results of the correlation analysis will be presented and discussed.

 

The Effect of Educational Stressors on the General Health of the Medical Residents
Keywords: educational stressor, general health, medical resident
Authors: Khajehmougahi, N.
Institution: Ahwaz University Medical Sciences

Summary: Introduction: In today's information age, troublesome regulations and traditional medical instruction procedures may cause serious stress and threaten the General Health (GH) of medical students.

Aim: The purpose of the present study was to determine the effect of current medical instruction procedures on the general health of residents studying at Ahwaz University of Medical Sciences.

Method: This was a cross-sectional study. Subjects were 114 residents from different specialty fields who were willing to cooperate. The instruments were the Educational Stressors Questionnaire, comprising 45 four-choice items, and the General Health Questionnaire. After the questionnaires were completed, the results were analyzed with Pearson correlation coefficients using SPSS.

Results: The residents identified their educational stressors as follows: lack of an organized curriculum, troublesome educational regulations, deficient educational resources, and inadequate clinical instruction. 37.6 percent of the subjects appeared to have problems in GH, and a significant positive correlation (p<0.01) was observed between educational stressors and each of the following: GH, somatic problems, anxiety, and impaired social functioning.

Conclusion: Educational stressors can be a risk factor for the residents' GH and may be followed by reduced interest, academic decline, and failure to master diagnostic and treatment procedures. The findings suggest that basic changes are needed in current medical instruction.

 

Formative Assessment — Uses by Students at the University of New Mexico School of Medicine
Keywords: student assessment; formative assessment
Authors: Kalishman, Summers; Timm, Craig; McCarty, Teresita; Mines, Jan; Serna, Lisa
Institution: University of New Mexico School of Medicine

Summary: Competency-based assessment adopted in 1993 at the University of New Mexico School of Medicine was revised in 2001 to include both formative and summative examinations. Formative examinations were unfamiliar concepts for many students and faculty; when adopted throughout Phase I (the first 20 months of the curriculum), students and faculty had different expectations about their meaning and use. To gain insight and understanding about expectations and uses, a survey was developed to assess students' and faculty members' perceptions about formative assessment. Results from the students' perspective are reported here. Results from faculty members will be reported later and will be compared with students' responses. Ninety percent (Class of 2005) and 87% (Class of 2006) of the students completed the survey. Results indicate students use the formative examinations to identify faculty expectations, as a study guide, to prepare for summative examinations, and to ascertain mastery of important concepts. Students also use formative examinations to practice different examination formats. A substantial number of students reported using the formative examinations to test understanding without studying. Only one-fifth of the students use formative examinations to identify a need for additional academic support. Data from this report spurred dialogue between students and faculty about the purposes and uses of formative assessment. Understanding that students approach formative examinations from different perspectives has been both an educational challenge and an opportunity. The commitment to well-written formative examinations mirroring the difficulty, depth, and formats of summative assessment remains a goal for faculty, consistent with a philosophy supporting self-directed education.

 

The Freshstart Simulated Surgery and the EU Doctors Induction Scheme
Keywords: OSCE, General practitioners, European Union
Authors: Burrows PJ, Khan AA, Trafford P, Jackson N
Institution: Royal College of General Practitioners and Department of Postgraduate General Practice Education, London Deanery.

Summary: There is an urgent need to enhance the GP workforce in London to meet the demands of the National Plan for the NHS (2000). Recruitment of qualified general practitioners from the European Union is part of the strategy to meet this need. The Freshstart Simulated Surgery is being used to examine the consulting and clinical skills of potential recruits from Europe. We report on its use with the first cohort in the London Deanery EU Induction Scheme. In Spring 2003, recruitment teams visited Paris and Madrid, following national advertisements seeking fully qualified GPs to work for a year in general practices in North London. Eighteen GPs were invited to London where they underwent further language assessment and undertook the simulated surgery ("entry" OSCE) to establish their learning needs.

Results: Ten EU doctors accepted the 3-month pilot induction programme, comprising seven Spanish GPs (four female and three male) and three French GPs (all male). The standards of their English language competence (assessed by the Oxford Placement Tests) ranged from 'lower intermediate' to 'advanced'. The results of the "entry" OSCE predicted those with significant difficulties during the programme. An "exit" OSCE was done at the end of the 3-month programme, which showed an improvement in scores for eight of the ten doctors. It was felt that the induction period was too short and eight of the doctors undertook extensions of between 6-12 weeks. The next programme has been extended to five months.

References:

1. Burrows P, Khan AA, Bowden R, Jackson N. The 'Fresh Start' Simulated Surgery. Education for Primary Care (2004). In press.

2. Heatley R, Trafford P, Khan AA, Cook V, Jackson N. The EU Doctor Induction Programme – the first cohort. In press.

 

Temperament, character, and academic achievement in medical students
Keywords: Temperament, Character, Academic Achievement
Authors: LEE, YM; HAM, BJ; LEE, KA; AHN, DS; KIM, MK; CHOI, IK; LEE, MS
Institution: College of Medicine, Korea University; College of Medicine, Hallym University

Summary: Objective: This study investigates the relationships between TCI dimensions and the academic achievements of medical students.

Method: Our sample consisted of 119 first-year medical students at the Korea University Medical School during the 2003–04 academic year. The Temperament and Character Inventory (TCI) was administered to all participants during one class in the third quarter of the first academic year of medical studies. In addition, first-year grade-point average (GPAs) scores were obtained. We examined the relationships between individual TCI dimensions and the GPA scores in the analysis by using correlation coefficients.

Results: Our results suggest that the NS (novelty seeking), P (persistence), and SD (self-directedness) dimensions are associated with academic achievement in medical students. Medical students scoring high on NS and low on P and SD were significantly less likely to succeed in examinations.

Conclusion: Dimensions of the personality play a major role in the academic achievements of medical students. Personality assessment may be a useful tool in counseling and guiding medical students.

 

High stakes undergraduate OSCEs: what do you do for students who require supplementary examinations?
Keywords: OSCE, supplementary examination, practicality
Authors: Worley, P. and Prideaux, D.
Institution: Flinders University

Summary: Increasingly, medical schools are using large scale OSCEs to examine students at key progression points in their undergraduate courses. The reliability and validity of this method of testing is extremely important in a culture where society is demanding high quality standards and students may involve lawyers to overcome perceived unfairness in assessments. Large scale OSCEs require a large commitment from a wide range of clinicians and support staff in both the University and the associated clinical services. This commitment may be given once a year, but what happens when a student is eligible for a medical/compassionate or academic supplementary examination, especially when this examination contributes to a ranking process that determines future career options? And if students with a medical supplementary then qualify for an academic supplementary examination, can you mount a third OSCE? This paper will examine this important assessment challenge, from both educational and practical perspectives, based on the experience at the Flinders University School of Medicine. We will present a range of solutions to this difficulty and will invite debate from others' experiences in meeting this challenge.

 

Use of student feedback by clinical teachers: evaluating evaluation?
Keywords: Evaluation, Feedback, Clinical Teachers
Authors: Boggis, C. Sarah Smithson
Institution: Manchester University, School of Medicine

Summary: Context: In 1999 a Manchester teaching hospital initiated a web based evaluation system collecting quantitative and qualitative student feedback on clinical education. This feedback has been provided to clinical teachers following each module and as annual summaries for five years.

Aim: To understand how clinical teachers use this feedback and how the evaluation system itself might be improved.

Method: Four approaches were used: focus groups, a workshop, surveys and a pilot review technique. Semi-structured focus group discussion about current utilisation of feedback identified the need for staff development. Creation and delivery of a workshop entitled 'Using feedback to improve your teaching' produced a participant-designed questionnaire exploring feedback use by faculty, and a pilot review whereby evaluation statements were reviewed for importance by Year 3 students and Year 3 and 4 teachers.

Results: The 6 focus group participants valued both quantitative and qualitative feedback, using it to inform their own and their team's teaching and to improve placement organization. The 7 workshop participants validated the focus group statements and suggested all teachers be surveyed about importance, omissions and need for support, in addition to questions about utility. Survey respondents (10% response rate) found the feedback valuable but wanted student suggestions about improving clinical learning experiences. The pilot review suggests some mismatch between teacher and student views of the importance of current feedback items.

Conclusion: Evaluating one's evaluation system motivates users to reflect on utility and importance, highlights new content areas to be included and identifies needs for additional training for faculty and students.

 

Using portfolios to develop and assess student autonomy and reflective practice
Keywords: portfolio assessment, reflective practice
Authors: Toohey, SM, Hughes CS, Kumar RK, O'Sullivan AJ, McNeil HP
Institution: University of New South Wales

Summary: The Faculty of Medicine at the University of New South Wales implemented a new undergraduate program in 2004, which focuses on achievement of a set of eight graduate capabilities. The program emphasis is on producing doctors who have a well integrated knowledge base, are capable of evaluating their own performance, and of setting their own learning agendas. Students have substantial freedom to pursue topics that interest them through project or clinical work. The flexibility of the program, as well as the focus on developing student responsibility and reflective practice, called for a different approach to assessment. As part of an assessment scheme which includes written and clinical exams, individual assignments and group projects, students present a portfolio of their work at three points in the six year program. Students must pass each of the portfolio assessments to progress to the next phase of the program or to graduate. This paper focuses on the distinctive design features of the UNSW portfolio. These include the use of the portfolio as a tool to help students take responsibility for planning and managing their own learning. Marking against the graduate capabilities through all aspects of the assessment system enables a student to present a profile of performance in regard to each of the capability areas. Included in the portfolio are selected assignment and project work, the full range of teacher grades and comments given in relation to each capability, peer feedback on team work and the student's own self assessment and reflection.

 

Learning medicine in primary care: What do final year students think?
Keywords: learning medicine; primary care; inter-professional learning
Authors: Pearson, David.; Lucas, Beverley.
Institution: Bradford City Teaching PCT & University of Leeds Medical School

Summary: Increasing numbers of U.K. medical students are learning medicine in primary care. Previous studies have explored student perceptions of early clinical attachments and one-to-one teaching in primary care. We will present the views of final year students from one UK university who have the opportunity to undertake four-week medical firms in a primary care setting. This qualitative study examines students' expectations, experiences and perceptions of the value of such placements, ascertained from a series of focus group interviews. The findings suggest that before the placements students expected a high degree of individual attention, with more opportunities for informal and formal teaching than in the hospital setting. They were concerned about 'hanging around', 'observing instead of doing' and spending too much time with other professionals. Their experiences suggested that both these hopes and fears were realized. The students considered that primary care teaching was more organized, of better quality, and provided more appropriate feedback and assessment than some hospital experiences. The students had a chance to see chronic illness, though concerns were raised about the lack of access to acutely ill patients. There were differing student views about the value of inter-professional learning opportunities within primary care. We suggest that students value well planned, structured teaching with a variety of clinical exposure, and that primary care can deliver this to the satisfaction of final year medical students. We raise questions for future research: What is the added value of teaching in primary care? Why do some students dislike inter-professional learning opportunities?

 

Facilitating the integrated small group Tutorial: The University of Transkei (UNITRA) experience
Keywords: PBL; small group tutorial; Facilitation
Authors: Iputo, J.
Institution: University of Transkei

Summary: UNITRA adopted the PBL/CBE medical curriculum in 1993. Formal lecture-based learning was replaced by the integrated small group tutorial, where the teacher's role is that of a facilitator of the learning process rather than a resource expert. This study describes the evaluation by students of the tutors' performance as facilitators and constructs the profile of a UNITRA tutor. The first 3 years of the curriculum are divided into 10-week blocks, and students evaluate tutors at the end of each block. These evaluations are reviewed and analysed for trends and similarities. 135 teachers tutored in 460 small groups lasting 10 weeks each. In general, tutors attended regularly, were punctual, and showed enthusiasm for the group. They were better at content facilitation than process facilitation. Tutors were proficient in keeping the groups on track, in giving feedback to groups, and in helping groups to function. They were good at asking probing questions, identifying learning errors, encouraging the pursuit of learning issues, and integrating basic and clinical sciences, and they shared their experiences. Tutors were less proficient in keeping time and in giving feedback to individuals, and did not give enough direction in the clinical reasoning process. They did not often raise psychosocial issues, and many tended to teach during the tutorial.

 

Involving students in standard-setting procedures for OSCEs?
Keywords: Assessment, OSCE, Standard Setting
Authors: Georg, W.; Scheffer, S.
Institution: Charité - Universitätsmedizin Berlin AG Reformstudiengang Medizin

Summary: The Reformed Medical Curriculum, a parallel track at the Charité - Universitaetsmedizin Berlin, is organized in blocks according to organ systems and periods of life. PBL is the essential learning and teaching method. At the end of each semester, students are assessed with summative exams composed of an OSCE and MCQs. For the OSCE, the mean score over all stations is computed, and 60% was set as an absolute standard. After the first experiences with OSCEs we recognized the need to compare different standard-setting methods to put our decisions on a sound basis. We decided to compare the Angoff method with the borderline approach because both methods seem realizable in our context. Furthermore, we compare the results of Angoff procedures established by 3 different panels (1. content experts, 2. students, 3. content experts and students mixed). In the Angoff procedure, content experts generally estimate the probability that a borderline student will pass an item. We assume that students who have just finished a semester are likewise "experts", and therefore we included student judges in the standard-setting process after they had passed the exam. Students might have a more realistic estimation of what can be learned in a given setting and period of time. They are "experts" in running through an OSCE, an experience most of the teachers have never gone through. We will present the standards set by different methods and different groups. The integration of students in the standard-setting process for OSCEs will be discussed.

 

Clinical Skills Assessment at Medical Schools in Catalonia (Spain) in the year 2003
Keywords: Assessment, clinical skills, undergraduate, OSCE
Authors: Viñeta M, Kronfly E, Gràcia L, Majó J, Prat J, Castro A, Bosch JA, Urrutia A, Gimeno JL, Blay C, Pujol R, Martínez JM.
Institution: Institut d'Estudi de la Salut

Summary: The Institute of Health Studies, jointly with the Catalan medical schools, has conducted several projects on clinical skills assessment using OSCEs since 1994. In 2003, an Objective Structured Clinical Examination (OSCE) to assess the clinical competences of final year medical students was used in seven Catalan medical schools. A multiple-station examination, with 14 cases distributed over 20 stations, and a written test composed of 150 MCQs (20 questions with associated pictures) were designed to assess medical competences. A questionnaire was distributed to the candidates at the end of the exam in order to find out the examinees' opinion. The OSCE scored highly on internal consistency, with a Cronbach's alpha of 0.86 for the multiple-station examination and 0.83 for the written test. The global mean score for the test was 61.82% (sd: 6.7). The mean scores obtained by the 422 medical students who completed the OSCE for each specific competence assessed were as follows: history taking 64.8% (sd: 8.4), physical examination 51.4% (sd: 11), communication skills 61.4% (sd: 6.2), knowledge 58.1% (sd: 10.4), diagnosis and problem-solving 59.8% (sd: 8.9), technical skills 73.9% (sd: 12.4), community health 64.5% (sd: 13), colleague relationship 48.6% (sd: 9.9), research 62% (sd: 22.5) and ethical skills 62.4% (sd: 17.1). The examinees' opinion of the organization, contents and simulations was high (mean score above 8 points on a 10-point Likert scale). OSCE-based methodology has proved to be a feasible, valid, reliable and acceptable tool to evaluate final year medical students in our context.
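For readers unfamiliar with the internal-consistency statistic reported above, the following minimal sketch (in Python, using purely hypothetical scores rather than the data from this study) illustrates how Cronbach's alpha is conventionally computed from a candidates-by-stations score matrix:

import numpy as np

def cronbach_alpha(scores):
    # scores: 2-D array, rows = candidates, columns = stations or items
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of candidates' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 candidates, 4 stations (percentage scores)
example = [[60, 55, 70, 65],
           [72, 68, 75, 70],
           [50, 45, 55, 52],
           [80, 78, 82, 79],
           [65, 60, 68, 66]]
print(round(cronbach_alpha(example), 2))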

 

 

Advanced OSCE Osaka Trial – Statistical Analysis of Assessment
Keywords: OSCE, National Board examination
Authors: Yoshida, I., Inutsuka, H., Abe, Y., Otaki, J., Ohno, R., Kuramoto, S., Saito, N., Tanabe, M., Tsuda, T., Deguchi, H., Nakajima, H., Ban, N., Fukushima, O., Fujisaki, K., Yoshida, M. and Hatao, M.
Institution: Committee on Advanced OSCE for National Board examination sponsored by Ministry of Health, Labour and Welfare, Japan

Summary: Purpose: We conducted a statistical analysis of the assessment-sheet data from the Advanced OSCE Osaka trial held in October 2003. Here we report the results of the analysis for two of the six stations, "pharyngeal pain" and "palpitation". In this OSCE the number of examinees was six and the number of raters per station was three or four. We mainly examined: (1) the degree of agreement between raters in the rank order of scores given to examinees; and (2) the degree of agreement between raters on each item of the assessment sheet.

Method: Correlation coefficients and the kappa coefficient were calculated as agreement indices.
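To illustrate the agreement index named here, a minimal sketch of Cohen's kappa for two raters scoring the same dichotomous checklist item might look as follows (Python; the ratings are hypothetical, not data from this trial):

import numpy as np

def cohen_kappa(rater_a, rater_b):
    # kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)
    p_expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical ratings of the same examinees on one checklist item (1 = done, 0 = not done)
rater_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_2 = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
print(round(cohen_kappa(rater_1, rater_2), 2))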

Results: Agreement between raters on the rank order of examinees' scores was low in the "pharyngeal pain" station, while in the "palpitation" station it was high. However, item-level agreement between raters was high in both stations, and was in fact higher for "pharyngeal pain" than for "palpitation": kappa coefficients were 0.80 and 0.76, respectively. To examine this discrepancy, a two-factor ANOVA with examinee and rater as factors was performed. The results showed that the patterns of examinees' scores differed markedly between the two stations.

Conclusion: To understand rater reliability correctly, it may be useful to analyze the assessment data for each item as well as the total score of the assessment sheet.

 

 

Is it possible to conduct high-stake oral examinations in a reliable and valid way for small numbers of candidates with limited resources?
Keywords: Oral examination, structured oral examination, high-stake examination, limited resources, reliability, validity, MCQ, feasibility
Authors: Westkämper R1, Hofer R1, Weber M2, Aeschlimann A3, Beyeler C4
Institution: 1Department of Medical Education, University of Bern, 2Stadtspital Triemli, Zürich, 3RehaClinic, Zurzach, 4Department of Rheumatology and Clinical Immunology/Allergology, University of Bern, Switzerland

Summary: Background: Medical societies face the challenge of ensuring high quality certifying examinations with optimal utility (reliability, validity, educational impact, acceptability, costs).

Aims: To assess reliability and to consider aspects of validity of a structured oral examination (SOE) in a small medical society.

Methods: Thirteen candidates took part in the certifying examination based on a blueprint of the Swiss postgraduate training program in rheumatology. A multiple-choice-question (MCQ) test was followed by a SOE [3 teams of 2 examiners testing 3 cases each in two hours according to previously agreed on criteria]. In addition, communication skills (CS) were assessed on a rating scale [9 items, Likert scale 1 to 4]. Data were analysed by SPSS.

Results: The cases were solved on average by 92% of the candidates (range 77-100%). Correlations of the competence demonstrated in one case with the sum of the results achieved in the other 8 cases ranged from –0.14 to 0.97, indicating a wide range of discrimination power. Nevertheless, overall reliability was high (Cronbach's alpha 0.88). Significant correlations were found between SOE and CS (r = 0.88, p < 0.001) and between SOE and MCQ (r = 0.58, p = 0.038), but not between CS and MCQ (r = 0.46, p = 0.110).

Conclusions: Our SOE assessed medical competencies that seem more closely related to CS than to the factual knowledge tested by MCQs, and yielded a high reliability. Our design and the efforts of the examiners contributed to a high validity. Altogether, this resulted in satisfactory quality with acceptable utility.

 

 

Using real patients in clinical examinations: A questionnaire study
Keywords: Patients; Paediatric; Clinical Exams
Authors: Williams, S.; Lissauer, T.
Institution: Royal College of Paediatrics and Child Health

Summary: There are a number of publications detailing the experience of examiners and candidates during clinical exams. Little, however, has documented the experience of the patients, despite specific concerns about the use of real patients in such exams.

The aim of the current research is to investigate the experience of parents and children who participate in the MRCPCH Part two Clinical and Oral examination and to open up the debate on the ethics of using real patients in clinical exams. Questionnaires were sent to all centres hosting the MRCPCH clinical examinations in June 2003 and February 2004 to capture both quantitative and qualitative data.

Overall, the results suggest that the majority of children and parents found taking part in the clinical examination a positive experience. Multiple regression analysis highlights administrative variables (such as the length of time involved and the conditions at the centre), rather than consultative variables (such as the interactions with the candidates and examiners), as the major factors in having a negative experience. Whilst this type of research is relatively new, the results of the present survey suggest that, far from being a traumatising or abusive experience, taking part in the exam was enjoyable for the vast majority of children. They further suggest that careful attention to the timings and structure of the exam could help to eradicate the potential for a negative experience.

 

 

Educational assessment of consultation competence
Keywords: Educational assessment, continuing professional development, competence
Authors: McKinley, R., Turner J.H.
Institution: University of Leicester and Leicestershire, Northamptonshire & Rutland Postgraduate Deanery

Summary: Introduction: Although the clinical consultation or encounter is the core of any physician's practice,(1) the competences required to conduct a clinical encounter are seldom targeted by programmes of continuing professional development.(2) Furthermore, education and training in these competences is difficult to obtain.(2) Finally, there is evidence that uninformed self-assessment is a poor guide to competence,(3) so physicians are likely to need assistance to identify their learning needs. We therefore developed and piloted a programme which offered serial educational assessment of consultation competence to family practitioners. We will present what we learnt about the challenges of organising such a programme.

Methods: This was a qualitative study in which researchers independent of those who ran the pilot programme interviewed participating physicians and analysed the interview transcripts.

Results: A total of 54 physicians participated; 43 were family practitioner principals (independent contractors to primary care organisations) and 11 were salaried doctors (employed by primary care organisations). Serial educational assessment can be affirmative and supportive but there are significant challenges to success. These arise from the process of the assessment and the culture from which it arises, the participants and the continuing support they require.

Conclusions: Serial educational assessment of consultation competence can be successful but requires careful planning, preparation, delivery and 'aftercare'.

1. Spence J. In National Association for Mental Health, ed. The purpose and practice of medicine, pp 271-80. Oxford: Oxford University Press, 1960.

2. Middleton JF, McKinley RK. Education for General Practice 2000;11:307-11.

3. Epstein R, Hundert E. JAMA 2002;287:226-35.

 

 

Who teaches in teaching hospitals and why?
Keywords: communication, valuing teachers, training, IT-based survey
Authors: Turner C, Shonibare T, Jones R, Wipliez M and Belfield P
Institution: Leeds Teaching Hospitals

Summary: As part of a month-long Special Study Module, two students (TC and ST) developed an intranet-based questionnaire about teaching, which was sent to 533 consultants within the Leeds Teaching Hospitals NHS Trust. The questionnaire was designed as a web page using Microsoft FrontPage, which allowed immediate formatting of data so that respondents could view the current status of the survey. Individuals were asked about their commitment to teaching, about training, and about communication from the Trust and the Medical School. They were also asked about suggested improvements and whether they felt valued. The response rate was low, with 84 respondents (22%). Time constraints were the biggest barrier for teachers, and half the respondents felt staffing levels were insufficient. Respondents felt undervalued by the Trust and the Medical School, and communication from both organisations with teachers could be improved. 83% of respondents had received training about teaching, and the majority felt they would benefit from further training. Despite the small sample size, some conclusions can be drawn from this work. It suggests greater emphasis is needed on joint working between the Trust and the Medical School and on how we value teachers. The IT-based approach, which was easy to use, provides a very useful vehicle for other surveys.

 

 

Comprehensive Assessment of Specialist Competence: An Integrated Model of Evaluation
Keywords: CanMeds Competence Evaluation Blueprint
Authors: Gary Cole, MA, PhD, Senior Research Associate, Royal College of Physicians and Surgeons of Canada; Nadia Z. Mikhael, MD, FRCPC, FCAP, Director of Education, Royal College of Physicians and Surgeons of Canada
Institution: Royal College of Physicians and Surgeons of Canada

Summary: The certification of medical specialists requires comprehensive assessment of a broad range of competencies. Current curriculum models of medical education being adopted throughout the world incorporate multiple competencies, ranging from informatics to medical expert, and in order to certify candidates it is essential to evaluate them in all of these competencies. While some of the competencies can be evaluated in a final examination, a number are best evaluated in-training. The Royal College of Physicians and Surgeons of Canada uses the CanMEDS competency curriculum framework for training and has developed a model of assessment that integrates an in-training evaluation system with a final examination. This system of evaluation entails the use of multiple blueprints that show the relationship and complementarity between the competencies measured in-training and those measured at the final examination. Thus the two modes of evaluation complement each other and share accountability for the full range of competencies required to certify candidates.

 

 

Assessment of Medical Students' Competence in Clinical Breast Examination
Keywords: Clinical Breast Exam, Clinical Skills Assessment
Authors: Margaret C. Duerson, Ph.D., Jacqueline K. Woodard, ARNP, Rachel Boulmay, M.D., Lou Ann M. Cooper, M.A.E. and Rebecca R. Pauly, M.D.
Institution: University of Florida

Summary: The University of Florida College of Medicine provides clinical breast examination (CBE) instruction during the second year. Instruction involved a lecture/demonstration using palpation of breast models and a gynecological teaching associate session to develop proficiency. Students demonstrated competence following instruction. The purposes of our study were to determine the maintenance of CBE proficiency and to establish the effects of clinical experience. To assess maintenance of proficiency, students performed a CBE on a standardized patient as part of a nine-station clinical assessment examination at the end of the third year. On the 12-item CBE checklist, 85% (74 of 87) scored 9 or above. Students were also surveyed on the extent and nature of their experience with CBE. Response options consisted of a 10-point scale, from 0 to 10 or more. Survey question 1 asked how many times the student had performed a CBE since initial training; question 2, the number of times the student had been observed by faculty performing a CBE; and question 3, how many times the student had observed faculty perform a CBE. The survey item means and standard deviations were: question 1, M = 6.08, s.d. = 2.95; question 2, M = 4.07, s.d. = 2.66; and question 3, M = 5.68, s.d. = 3.35. We concluded that the ability to perform CBE competently is retained one year after formal instruction. Students who indicated 10 or more on the survey did not perform significantly differently on the CBE than students with less experience. In this group of students, clinical experience does not appear to diminish or enhance performance on the CBE.

 

 

Student Attitudes About Clerkship Quality: What Makes a Difference?
Keywords: clerkships, evaluation, student attitudes
Authors: Baillie, S.; Relan, A
Institution: David Geffen School of Medicine at UCLA

Summary: Abstract

Background: Medical educators aim to develop educational programs that are high quality and perceived as valuable by students, in order to standardize the curriculum towards more favorable learning outcomes and attitudes. However, it is widely documented that students' clinical training is characterized by unpredictable variability in the type of patients seen, the quality of mentoring, the nature of feedback and students' own motivation towards a particular specialty. The purpose of this study is to determine whether a common set of components that influence student attitudes can be identified from a study of seven clinical clerkships, based on standard evaluations of these clerkships.

Methods: A recent implementation of a web-based 'Course Eval' clerkship evaluation system at the David Geffen School of Medicine, UCLA, has elicited students' attitudes towards seven required clinical clerkships, with a response rate of 95% across numerous hospital and ambulatory sites. A regression analysis of student perceptions on thirteen clerkship evaluation questions, spanning the seven clerkships, will be presented, identifying statistically significant components that students perceive to be most valuable in clerkship quality. Areas evaluated by students include clerkship orientation, explanation of goals, grading, feedback, patient exposure, achievement of objectives, faculty supervision, appropriateness of responsibility, resident and faculty clinical teaching, didactic sessions, skill observation, site quality, and overall quality. A qualitative analysis of students' evaluation comments will also be presented.

Conclusions: Knowledge of experiences that students perceive as making a difference in clinical clerkships will provide important information to standardize didactic strategies across clerkships, and hence strengthen students' clinical experience.

 

 

Curricular Reform: Student Attitudes Towards Expansion of Problem Based Learning Tutorials
Keywords: curriculum reform, problem based learning, student attitudes
Authors:Baillie, S.
Institution: David Geffen School of Medicine at UCLA

Summary: Research has shown that problem-based learning adds a new dimension to student learning compared to a traditional lecture-based curriculum. At the David Geffen School of Medicine at UCLA, the first two years of the curriculum have been redesigned to incorporate a substantial increase in case-based problem-based learning (PBL) tutorials. PBL case tutorials have increased markedly from 13 to 28 in the first year, and plans include adding 24 to the second year this fall. The purpose of this study is to examine student attitudes towards the PBL experiences and to document their attitudes in relation to other course components.

Method: All 147 students in the first-year curriculum completed web-based evaluations of the problem-based learning curriculum and their tutors through the 'Course Eval' system in each of four new curriculum blocks. For the PBL sessions, students were asked how helpful this course component was in meeting course objectives. For course tutors, a series of eight questions was asked about the tutors and their PBL teaching skills. Students were also provided with a comment field in both areas. The PBL curriculum is also compared to other curricular components, and overall tutor evaluations are reported. Response rates ranged from 98-100% for each block.

Conclusions: Knowledge of student attitudes toward curricular experiences that enhance learning in the first year provides important information for medical educators aiming to strengthen student learning experiences.

 

 

A Meta-Analysis of the Published Research on the Predictive Validity of the MCAT on Medical Students Cognitive and Non-Cognitive Outcome Measures
Keywords: MCAT, Predictive Validity, Meta-Analysis
Authors: Donnon, T.; Violato, C.; Oddone Paolucci, E.
Institution: University of Calgary

Summary: The MCAT remains one of the primary cognitive measures and criteria used for the selection of candidates to medical schools. In the present study, a meta-analysis of research published in refereed journals on the predictive validity of the MCAT was completed on 132 articles that fit the following inclusion criteria: 1) studies had to focus on the use of the MCAT in the prediction of at least one of seven dependent variables, as defined in pre- (e.g., course-based grades/scores, objective structured clinical exams, and non-cognitive attributes such as empathy/altruism) and post-medical school (e.g., USMLE Step I & II Exams, MCC Qualifying Exams I & II) measures; 2) the studies had to present empirical findings on at least one of these dependent measures; 3) the studies had to have been published in a peer-reviewed journal issued as a printed periodical; and 4) only studies that presented psychometrically sound dependent measures were selected (i.e., standardized instruments, summative examinations, objectively scored observational ratings, etc.). The effect sizes coded were validity coefficients (e.g., correlations, regression coefficients). A number of moderator variables were coded (e.g., demographics: sex, age, ethnicity). Both weighted and unweighted effect size analyses were conducted, as well as moderator analyses.
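One conventional way to pool correlation-type validity coefficients, which may approximate the weighted analysis described above, is the Fisher z approach sketched below (Python; the coefficients and sample sizes are hypothetical, not values from the 132 articles):

import math

def pooled_correlation(correlations, sample_sizes):
    # Fisher z-transform each correlation, average with weights (n - 3), back-transform
    z_values = [0.5 * math.log((1 + r) / (1 - r)) for r in correlations]
    weights = [n - 3 for n in sample_sizes]
    z_bar = sum(w * z for w, z in zip(weights, z_values)) / sum(weights)
    return (math.exp(2 * z_bar) - 1) / (math.exp(2 * z_bar) + 1)

# Hypothetical validity coefficients from three studies
print(round(pooled_correlation([0.30, 0.45, 0.25], [120, 85, 200]), 2))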

 

 

Variation on a theme: the use of standardized health professionals (SHP) in an objective structured clinical examination (OSCE) in neonatal-perinatal medicine
Keywords: OSCE, Standardized Health Professional, Neonatal
Authors: Brian Simmons, Ann Jefferies, Deborah Clark, Jodi McIlroy, Diana Tabak and the Program Directors of the Neonatal-Perinatal Medicine Programs of Canada (2002-03)
Institution: Depts. Of Paediatrics, University of Toronto, Toronto; University of Calgary, Calgary. Wilson Centre for Research in Education, University of Toronto, Toronto, ON, Canada.

Summary: Background: Standardized patients (SPs) are traditionally used in the OSCE to portray patients or parents. We developed an OSCE for subspecialty trainees in Neonatal – Perinatal Medicine that included SHP roles.

Objective: To compare reliability of SHP and SP stations.

Design/Methods: Two OSCEs conducted in 2002 and 2003 consisted of 14 SP stations, 8 SHP stations and 1 post-encounter probe. SHPs included respiratory therapists, nurses, physicians and a medical student. Examiners completed station-specific checklists, global ratings to assess CanMEDS roles (medical expert, communicator, collaborator, manager, professional, scholar, health advocate) and an overall global rating. SPs and SHPs completed communication global ratings. Projected alpha coefficients (to a ten-station OSCE) were calculated using the Spearman-Brown prophecy formula.
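The Spearman-Brown prophecy formula referred to above projects the reliability of a test to a different number of stations. In its standard form (a general statement of the formula, not the authors' exact computation), the projected reliability $\rho_n$ of a test whose length is changed by a factor $n$ relative to an observed reliability $\rho$ is

$$\rho_n = \frac{n\,\rho}{1 + (n - 1)\,\rho}, \qquad n = \frac{\text{target number of stations}}{\text{observed number of stations}}.$$

For example, under this formula a hypothetical 14-station checklist with an observed alpha of 0.65 would project to roughly 0.57 for 10 stations ($n = 10/14$).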

Results: 54 trainees participated. As shown in the table, alpha coefficients were greater than 0.70. There were no significant differences in reliability between SP and SHP stations (p > 0.05). Reliability was consistently higher with global rating scores.

 

                          Checklist score   CanMEDS score   Communication score
2002  Stations with SHPs       0.71              0.87              0.79
2002  Stations with SPs        0.85              0.92              0.93
2003  Stations with SHPs       0.74              0.85              0.84
2003  Stations with SPs        0.75              0.87              0.90

Conclusions: SHPs may be used in OSCE stations that require medical knowledge and expertise. SHPs could be used in high-stakes exams. A formal training program should be considered.

 

The Achievement of Safe and Effective Clinical Outcomes: A Measure of Student Performance
Keywords: assessment, performance, summative, formative, safety, effectiveness
Authors: Williamson, M.
Institution: Otago Medical School

Summary: The Department of General Practice, Dunedin School of Medicine, has developed an assessment process based on the achievement of Safe and Effective Clinical Outcomes for patients. The rationale for using clinical outcomes as assessment criteria is discussed, and safety and effectiveness are defined for this context. The advantages, disadvantages and implications of the method are explored with reference to both its formative and summative functions. Preliminary results are shared for discussion with data from both undergraduate and postgraduate students. Students work through a series of carefully controlled clinical situations, which together simulate a general practice clinic. Information and advice may be accessed in ways typical of everyday clinical practice but not usually permitted in high stakes assessments such as OSCEs. The desired clinical outcomes are specified for each scenario. These are based on current evidence and are patient-centred and context-specific. The proficiency of practice (the ease with which safe and effective outcomes are achieved) is assessed by measurement of time and resource use. This is a reflection of the relative usefulness of a clinician in a given clinical context. Student feedback indicates these clinics provide great learning opportunities. Initial results suggest that students' proficiency increases with experience but experienced students may be less likely to seek advice, sometimes resulting in failure to achieve safe and effective outcomes. A possible trend that sacrifices safe and effective outcomes for proficiency has significant implications for medical care and medical education.

 

Looking Back: Retrospective Self-evaluation of Feedback Skills
Keywords: Feedback, Retrospective, Self-evaluation
Authors: Harrison, A., D'eon, M., Nation, J.; Sadownik, L., Harasym, P.
Institution: University of Calgary, University of Saskatchewan, University of British Columbia, CANADA

Summary: How insightful are learners about their own skill level? There is evidence suggesting that students are better at assessing their own performance after they receive education about the subject matter. This collaborative research project, involving three Canadian universities, compared methods of evaluating learning after two half-day workshops designed to teach postgraduate medical trainees (residents) how to give feedback. Three groups used a standardized checklist to assess residents' performance in giving feedback.

A) Self-assessment by participants about their performance giving feedback to 'students' was obtained on three occasions: the first was before doing the workshop, the second was after the workshop, and finally, a retrospective self-evaluation was filled out after the workshop asking participants to re-assess how well they gave feedback before they took the workshop.

B) Evaluation of residents' performance by 'standardized students' (trained actors) who assessed residents' performance giving feedback before and after the workshop, and,

C) Evaluation of residents' performance by external raters. The trained raters who assessed the videotapes of residents giving feedback were not informed which tapes were made before and which after the workshop. The results of the various evaluations are presented and compared. The intent of the study was to determine whether self-evaluation (particularly retrospective evaluation) by participants provides information comparable to that obtained by other, more 'objective' (as well as more time-consuming and more expensive) evaluations by 'students' or by independent raters viewing videotapes.

 

Clinical skills assessment in last-year medical students: analysis of a six-year experience
Keywords: OSCE, medical students
Authors: Descarrega-Queralt, R.; Blay-Pueyo, C.; Solà-Alberich, R.; Castro-Salomó, A.; Vidal-Marçal, F.; Masana-Marín, L.
Institution: Facultat de Medicina i Ciències de la Salut. Universitat Rovira i Virgili

Summary: Purpose. Since 1995, clinical skills assessment (CSA) has been carried out with last-year students at the Faculty of Medicine of Universitat Rovira i Virgili (Reus-Tarragona, Catalonia). We analyse our experience with the introduction of new CSA tools in a medical school where traditional knowledge-based assessment methods are still the rule.

Method. Selection of the clinical situations included in the objective structured clinical examination (OSCE) was carried out by standardised consensus among a group of Catalan medical school professors. From 1995 to 1997, the competence components evaluated were anamnesis, physical examination, communication skills and interprofessional relationship. In 1998, with the introduction of new competence components and new instruments, the variability in station format increased. After the examination, candidates fill in an anonymous questionnaire to rate the OSCE's acceptability, educational impact and the relevance of its contents.

Results. A total of 385 last-year medical students were exposed to the OSCE (1995-2000). The number of days over which the test was administered varied across the years (from 4 to 7 days), depending on the number of candidates, the number of cases and the efficiency of the multi-station circuit. Mean results of the candidates' opinion questionnaires were high. Comparison of the global results for the cases common to the last two years revealed a statistically significant difference. Communication, interprofessional relationship and preventive components remained stable across the days.

Conclusion. The OSCE is a valid, reliable and acceptable tool to evaluate clinical skills in last-year medical students; moreover, it is having a positive educational impact in our medical school.

 

In which competence components of an OSCE do Family and Community Medicine residents obtain better results?
Keywords: Competence, Family medicine
Authors: Ruiz E*, Cots JM*, Sellares J*, Florensa E*, Saenz JI*, Gámez X**, Rodríguez MA**, Sanchez Chamorro, Emilia***. *Family physicians, Steering Committee, Spanish Society of Family Medicine. **Statisticians, Steering Committee, Spanish Society of Family Medicine. ***Ministry of Health.
Institution: Spanish Society of Family Medicine

Summary: Objectives: To study the differences in the scores obtained in the competence components of an objective structured clinical examination (OSCE) developed by family and community medicine specialists (FCMS) and administered to residents at the end of their residency.

Methodology: Participants were 362 doctors: 90 men and 272 women. There were 25 stations, each lasting 6 minutes. Seven components were evaluated: anamnesis, physical examination, technical skills, management, family care, preventive activities and communication. Each station evaluated between one and three components.

Instruments: standardized patients, dummies, simulators, telephone consultation, clinical cases and images. Descriptive statistics and analysis of variance (ANOVA) were used in the statistical analysis.

Results: The overall mean score was 55.2 (standard deviation 5.72), no sex differences were observed (p = 0.526).

Competence scores by sex (see attached Fig. 1)

Conclusions: All the components of the test achieved mean values close to the overall score of the test, which demonstrates a valid and balanced design. The scores in anamnesis, physical examination and technical skills were higher than the overall score. Management scores were significantly lower. The worst score was obtained in preventive activities. Women scored higher than men in anamnesis, family care and preventive activities; men scored better than women in technical skills. It is not known whether these differences are due to training factors or to sociological factors.

 

Assessment of the construct validity of an OSCE at the end of residency training through the results of a survey
Keywords: OSCE, validity, residency training
Authors: Florensa E*, Cots JM*, Sellares J*, Ruiz E*, Saenz JI*, Gámez X**, Rodríguez MA**, Sanchez Chamorro, Emilia***. *Family physicians, Steering Committee, Spanish Society of Family Medicine. **Statisticians, Steering Committee, Spanish Society of Family Medicine. ***Ministry of Health.
Institution: Spanish Society of Family Medicine

Summary: Objective: To determine the construct validity of an objective structured clinical examination (OSCE) carried out at the end of residency training in family and community medicine, using a specific survey.

Methods: An indirect method was used to assess the construct validity of the test. At the end of the 25-station OSCE, candidates were asked to answer a questionnaire with 6 questions about the validity of the test. Opinion was assessed as the level of agreement with each question on a semiquantitative scale from 1 to 10, where 1 indicated total disagreement and 10 total agreement.

1. Did the presence of observers negatively influence my performance? 2. Is the number of OSCE stations sufficient to assess clinical practice? 3. Do the stations represent the daily clinical practice of a family physician (FP)? 4. Do the stations represent problems that an FP should manage? 5. Is the level of the stations reasonable? 6. Do you believe that this test measures professional competence better than a typical theoretical exam? The percentage, mean and standard deviation were calculated for each question.

Results: N=362. The participation rate was 94.3%. Results are shown below (see attached Fig. 2).

Conclusions: According to the answers to questions 3 and 4, the test was successful in simulating the usual medical practice of an FP. The rest of the assessed items indirectly support a high construct validity of the test.

 

Use of an intranet for the development of cases to be used in an objective structured clinical examination
Keywords: Intranet, OSCE
Authors: Cots JM*, Sellares J*, Florensa E*, Ruiz E*, Gámez X**, Rodríguez MA**, Sanchez Chamorro, Emilia***. *Family physicians, Steering Committee, Spanish Society of Family Medicine. **Statisticians, Steering Committee, Spanish Society of Family Medicine. ***Ministry of Health.
Institution: Spanish Society of Family Medicine

Summary:

- Objective: To determine the usefulness and confidentiality of an intranet for developing the clinical cases of an objective structured clinical examination (OSCE).

- Methodology: A specific intranet was set up to develop the cases to be used in the OSCE stations. Access was protected with two levels of security: personal identification and password. Each member of the test committee (TC) entered the different parts of a case separately: clinical content and evaluation checklist. The remaining members of the TC could access the full set of cases and make corrections if necessary. The evaluator-observers of the OSCE cases could access the cases in two consecutive periods: during the first they had access to the content of the clinical case (2 weeks before the test), and during the second to the list of key points to be evaluated (72 h before the test). Each evaluator was asked to sign a confidentiality agreement.

- Results: A total of 14 TC members and 80 evaluator-observers participated through the intranet. The standard committee had required 6 meetings, whereas with the intranet only 2 meetings were necessary. The number of revisions of the cases through the intranet was between 6 and 8, whereas the number of revisions in the standard committees was 4 to 6. Each TC member made an average of 4 corrections per case. The confidentiality of the test was maintained, since not a single incident was reported.

- Conclusions: The use of an Intranet for the development of cases for an OSCE test is efficient, since it reduces the number of meetings of the test committee and increases the number of corrections of the clinical cases without affecting their confidentiality.

 

 

Assessment at the end of the residency training: pilot study through an OSCE test
Keywords: OSCE, residency training
Authors: Cots JM*, Sellares J*, Florensa E*, Ruiz E*, Saenz JI*, Gámez X**, Rodríguez MA**, Sanchez Chamorro, Emilia*** *Family physician. Steering committee. Spanish Society of Family Medicine. **Statisticians. Steering committee. Spanish Society of Family Medicine. ***Ministry of Health
Institution: Spanish Society of Family Medicine

Summary: Objective: To assess the validity, acceptability and reproducibility of an objective, structured clinical examination (OSCE) test as the final evaluation at the end of the residency training.

Methods: An OSCE with 25 stations evaluated different competence components: anamnesis, physical examination, communication, technical skills, management, family care and preventive activities. The following instruments were used: standardized patients, open questions with short answers, clinical cases, telephone consultation, images and dummies. Global results by component and station, the reproducibility of the test, and an item analysis by case and candidate were calculated.

Results: A total of 362 family physicians took the test. The overall mean score was 55.1. Mean scores by component were as follows (see attached Fig. 3). *Results expressed as a percentage of the best possible score. Cronbach's alpha was 0.67. Global scores per case and component showed significant differences between candidates.
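Cronbach's alpha, reported above as the index of internal consistency, is computed from the candidates-by-stations score matrix. A minimal Python sketch is given below; the data are illustrative random numbers, not the study's scores.

import numpy as np

def cronbach_alpha(scores):
    # scores: candidates x stations matrix of station scores
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of stations
    item_vars = scores.var(axis=0, ddof=1)       # per-station variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of candidates' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative example: 362 candidates, 25 stations of random (uncorrelated) scores
rng = np.random.default_rng(0)
demo_scores = rng.normal(loc=55, scale=10, size=(362, 25))
print(round(cronbach_alpha(demo_scores), 2))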

Conclusions: The final test at the end of residency training obtained good results in reproducibility and internal consistency. The acceptability and closeness to real practice, as evaluated by the candidates, were high. This type of evaluation can be used to determine the level of competence of physicians finishing their residency training.

 

Effect of the OSCE test on the final score in Family and Community Medicine in multiple consecutive locations
Keywords: OSCE, Family Medicine
Authors: Florensa E*, Cots JM*, Sellares J*, Ruiz E*, Saenz JI*, Gámez X**, Rodríguez MA**, Sanchez Chamorro, Emilia*** *Family physician. Steering committee. Spanish Society of Family Medicine. **Statisticians. Steering committee. Spanish Society of Family Medicine. ***Ministry of Health
Institution: Spanish Society of Family Medicine

Summary: Objectives: To assess the effect of carrying out an objective, structured clinical examination (OSCE) test in multiple locations on consecutive dates on the scores of family physicians at the end of their residency training.

Methods: The OSCE consisted of 25 stations of 6 minutes each and was implemented in four sites, in the following order: Madrid, Sevilla, Bilbao and Barcelona, during four consecutive weekends, with three daily shifts. The 362 candidates, belonging to different Teaching Units, were assigned to the locations according to their preferences and the distance from their home residence. Within each location a random circuit was constructed. Analysis of variance (ANOVA) was used to assess the possible influence of variation in location and shift on the score.

Results: ANOVA by location: location significantly influenced the score (F=13.22; p<0.001). In multiple comparisons among pairs of locations, scores in Madrid were significantly higher than in the rest of the locations (p<0.001), with no differences among the others. ANOVA by teaching unit: statistically significant differences were observed among the scores according to the teaching unit the candidates came from (F=1.884, p<0.001).

ANOVA by shift: ANOVA was calculated for each location; no statistically significant differences among the scores according to shift were observed.
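A one-way ANOVA of the kind used here can be sketched with scipy.stats.f_oneway; the per-location score samples below are hypothetical illustrations, not the study's data.

import numpy as np
from scipy.stats import f_oneway

# Hypothetical per-location score samples (illustrative only)
rng = np.random.default_rng(1)
madrid    = rng.normal(60, 8, 120)
sevilla   = rng.normal(55, 8, 90)
bilbao    = rng.normal(55, 8, 70)
barcelona = rng.normal(55, 8, 82)

# F statistic and p-value for the location effect
f_stat, p_value = f_oneway(madrid, sevilla, bilbao, barcelona)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")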

Conclusions: Apart from the higher scores obtained in Madrid, there is no effect of site or shift. The observed differences in the final score are possibly due to determinants external to the test: differences in education, a learning effect, or place of residence.

 

Relationship of length of post-graduate training to candidate performance on a high stakes clinical examination
Keywords: licensing examinations, length of post-graduate training, clinical skills examination
Authors: Wood, TJ, Smee, SM, Blackmore, DE
Institution: Medical Council of Canada

Summary: Problem Statement: The purpose of the study was to determine if the length of post-graduate training influences candidate performance on a high stakes clinical skills examination.

Methods: Scores from Canadian first-time examination takers who had attempted a clinical skills licensing examination (MCCQE Part II) were analyzed to determine if length of post-graduate training influenced candidate performance. Type of post-graduate training and ability on a general knowledge licensing examination (MCCQE Part I) were also considered as factors.

Results: There was a gradual decline in all examination scores with an increase in length of post-graduate training. This effect occurred irrespective of the type of post-graduate training and across all ability levels on the MCCQE Part I. There was also a slight increase in scores from one to two years of post-graduate training followed by a decrease but this pattern was most pronounced for family medicine trainees and those candidates who had lower scores on the MCCQE Part I.

Conclusions: The implication of these results for administrators of high stakes examinations and for candidates who delay taking examinations until they have more experience will be discussed.

 

General physicians' views about communication skills and patient education in Shiraz, Iran
Keywords: communication skills & patient education
Authors: Najafipour, F.
Institution: Valfajr Health Center

Summary: General physicians' views about communication skills and patient education in Shiraz, Iran. Fatemeh Najafipour, Azam Najafipour, Bagher Nasimi. Nowadays the clinical competency of physicians is often judged on the basis of their communication with patients. Effective communication between physician and patient is one of the most important steps towards improving the level of health and prevention in society, and a physician's effective communication skills lead to a more involved role for the patient in the treatment process. This study was carried out to assess the views of general physicians on communication skills and the role of patient education in the treatment process. Material & Methods: This was a descriptive, cross-sectional study. Data were gathered using a validated questionnaire containing closed questions focused on communication skills and the educational behaviour of physicians towards patients. The questionnaire was distributed among 100 general physicians who participated in a continuing medical education (CME) programme. Results: 85% of general physicians stated that effective communication is very important in the treatment process; 90% stated that educating the patient leads to more cooperation between physician and patient and better follow-up of the treatment plan; only 40% had spent adequate time on patient education. The details of the results will be presented at the conference.

 

The Impact of the Eighty Hour Work Week on The House Staff at a Large University Affiliated Community Based Teaching Hospital
Keywords: Resident Working Conditions
Authors: Best, K., Weiss, P., Koller, C., Hess, L.W.
Institution: Lehigh Valley Hospital Department of Obstetrics and Gynecology

Summary: Objective: To determine how the recently mandated eighty hour work week restriction affects the psycho-social well-being and clinical experience of ob/gyn, surgical, and internal medicine residents at Pennsylvania's largest community-based teaching hospital.

Methods: A questionnaire consisting of ten items, each scored on a five-point Likert scale, was distributed to upper year residents in the departments of ob/gyn, surgery, and internal medicine. The questionnaire addressed residents' perceptions of the psycho-social and clinical impact of the mandated eighty hour work week as well as their program's level of compliance. Resident participation in sentinel cases and/or procedures prior to and after the mandated hours was evaluated to determine the impact on clinical experience.

Results: Final results are pending; however, preliminary data suggest that the ACGME work restrictions have positively impacted resident stress/fatigue and home life without compromising the quality of patient care or patient safety. A small, but statistically non-significant, impact on surgical and/or procedural experience was noted.

Conclusions: Transitioning to the eighty hour work week prompted numerous concerns from house staff and faculty. Thus far, our data suggest that there is no negative impact on the quality of patient care. The data also show a commitment to compliance with the mandated work restrictions despite these concerns.

 

Opinion of students about a graduation clinical exam with a real patient
Keywords: Clinical exam, exam with real patient
Authors: Ponce de León, M.
Institution: Universidad Nacional de México, Medical School

Summary: The School of Medicine of the National Autonomous University of Mexico graduates its students with an Integrated Clinical Exam. The exam has two sections: one theoretical and the other practical. The practical part is performed by the student with a real patient; its object is to certify the knowledge, abilities and attitudes of the student in the management of the case. The exam takes place in a hospital, at the bedside, with three presiding physicians, and lasts 2.5 hours, during which the student is assessed on the ability to relate to the patient, take a history and perform a physical examination. Afterwards, alone with the physicians, the student is questioned on the diagnosis, the interpretation of clinical studies, the medication and the prognosis. Each assessing physician has a written guide to follow during the exam. The object of this study was to gather the students' opinions on how objective the exam was and whether they considered it to assess their competence. Methods: a group of professors developed a Likert-type questionnaire with 34 items divided into 4 categories: organization and logistics of the assessment, characteristics of the patient, characteristics of the evaluating physicians, and the student's own attitudes towards the evaluation. The questionnaire was applied to 280 students as soon as they had finished the exam and before they were informed of their grades. The questionnaire was validated, obtaining a Cronbach's alpha of 0.8632, a Kaiser-Meyer-Olkin sampling adequacy of .859, and a correlation range between .880 and .426.

 

Evaluation of an intervention to improve teaching skills in case analysis of randomly selected cases
Keywords: teaching skills, case analysis, evaluation
Authors: Evans, A.; Ormston, B; Dunbar, A; Taylor, G
Institution: University of Leeds

Summary: Background: Random case analysis (RCA) is a commonly used one-to-one teaching technique in UK family medicine/general practice (GP), but video-tape review of teaching sessions demonstrated wide variation in teaching skills of GP trainers. In response to this, we undertook an action research project with the aim of helping trainers to improve their teaching skills. A second part of the project, not reported here, was to develop and validate a profile for assessment of RCA teaching skills.

The intervention: A teacher-training package was devised for prospective trainers, with three elements:

1) a 2-hour session with short factual input and an opportunity to rehearse skills and receive feedback using each other's cases, followed by reflective discussion

2) video-taped live experience of teaching the GP registrar (trainee) of an experienced trainer, the latter acting as educational supervisor for the prospective trainer

3) follow up session to review the video-tape in a small group setting with an experienced facilitator.

Evaluation: Participant reaction was favourable. Observation of the teaching behaviour of eleven prospective trainers was compared with that of eleven trainers one year after appointment, who had teaching experience but had not had the specific training in RCA. One rater completed the assessment profile for all the tapes, and 5 tapes from each group were rated independently by a second person. The mean score was higher for the trained group, who were more learner-centred and used a wider range of teaching methods.

 

Evaluation instrument for clinical nursing training
Keywords: clinical nursing training, evaluation
Authors: Guitard Sein-Echaluce, Luisa; Subira Garrido, Alba; Grau Armengol, Teresa; Pedrol Aige, Teresa; Ribe Gracia, Anna; Taules Bravo, Yolanda
Institution: Escuela Universitaria de Enfermería. Universidad de Lleida España

Summary: Nursing students cannot reach a good level of education without clinical experience, in which they apply the knowledge learnt during theoretical teaching, acquire essential skills, and develop the attitudes necessary for the profession. It is necessary to measure the level that each student reaches in this apprenticeship by means of an evaluation system. Evaluation is a difficult process and perhaps one of the main challenges that teachers have to face.

Objective: To develop an evaluation instrument for clinical training that could guarantee the evaluators' objectivity and diminish inter-evaluator differences.

Outcomes: Evaluation must start from defined objectives, since these are the reference for the evaluation; a judgement is obtained by comparing the objectives with reality. These objectives are related to the theoretical programme of each course. An evaluation dossier has been produced and divided into four sections: attitudes, knowledge, and skills, the latter comprising techniques and records. Each section has been broken down into several items so that its evaluation can be detailed. The weight of each item varies depending on the year the student is in: attitudes carry more weight in the first year and skills carry more weight in the third year.

Conclusions: Using the same evaluation instrument for the clinical training period over the three years of nursing studies allows a better assessment of each student's progress and detects the exact points of difficulty.

 

A survey of educational quality in the view of medical students at the Medical Science University of Shiraz
Keywords: quality, medical student, community needs
Authors:
Najafipour, SE.
Institution: medical university

Summary: A survey of educational quality in the view of medical students at the Medical Science University of Shiraz. Sedighe Najafipour, Feredon Azizi, Mehdi Saber, Fatemeh Najafipour. The study of educational quality, with emphasis on teaching methods and educational levels, is an integral part of students' education. The purpose of this study was to determine the views of preclinical students and interns on the quality of education in these courses. This was a descriptive study of 117 preclinical students and 107 interns, using a valid and reliable questionnaire. Information was collected about the methods of theoretical and clinical education, the adjustment of the educational content to the needs of society, and the rate of students' participation in research work. Results showed that 48% of preclinical students and 70% of interns believed that education had been adjusted to society's needs; 74% of preclinical students and 30% of interns believed that lectures were always used in their education; 44% of interns believed that problem-solving methods were always used; 60% of interns believed that the question-and-answer method was used; 32% of preclinical students and 37% of interns believed there had been the possibility of individual research during their education; and 40% of preclinical students and 25% of interns believed that the possibility of community research had been moderate. The details of the results will be presented at the conference.

 

"Developing Continuing Professional Development through Student Education
Keywords: Continuing Professional Development. Assessment.
Authors: Shann, S. Lowe, J.
Institution: Northumbria University

Summary: With health care professions worldwide recognising the need for a commitment to lifelong learning, it is essential for clinicians to demonstrate continuing professional development (CPD) through evidence-based practice. For health professionals actively involved in student education in the clinical setting, a major commitment of both time and resources is required. It therefore appears logical to utilise the skills needed to educate students as evidence for CPD. The authors have developed a new and innovative method of evidencing practice through the use of models of reflection (Kolb 1984; Boud, Keogh and Walker 1985) and a competency-based student assessment tool. The presentation will outline how the tool can provide a clear, concise framework on which to base student supervision in a dynamic manner, whilst highlighting the need for collaborative working through uni- and interdisciplinary teams. The presentation will then discuss how the use of this assessment tool will enable clinicians to reflect upon their own practice whilst identifying and facilitating areas of development for students.

References: Boud D, Keogh R, Walker D (eds) (1985) Reflection: Turning Experience into Learning. London: Kogan Page. Kolb D (1984) Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs: Prentice-Hall.

 

International medical graduates in Australia: Assessment for hospital practice (2: A practical test of safety and competence)
Keywords: international medical graduates, assessment, patient safety, OSCE, simulated patients
Authors: Elliot S3, Conn J.3, Robertson K3, Dodds A3, McGrath B2, Kanaris A2, Nestel D1, Jolly B1, Graecen J4, Tiller J5, Dancer A5, Findlay D6, Flanagan B7, Harrison J7, Paltridge D8
Institution: 1. Centre for Medical & Health Sciences Education 2. Postgraduate Medical Council of Victoria 3. Faculty Education Unit, University of Melbourne 4. Rural Workforce Agency Victoria 5.Victorian Medical Postgraduate Foundation 6.General Practice Education Australia 7.Southern Health Simulation & Skills Centre 8. St Vincent's Hospital Human Simulator Centre

Summary: The assessment for hospital practice has been developed by a consortium of clinicians, educators and administrators from professional organisations in Victoria, Australia. The government-funded project was completed in September 2003, after recognition that there was a need for such assessments (1). One phase of the assessment process is the Practical Test of Safety and Competence. This was designed as a 20-station Objective Structured Clinical Examination to assess candidates identified as borderline in either of the other phases: the Written Test or the Structured Interview. Skills were selected that were testable in simulated format, commonly performed, and potentially posed the most significant risk to patients. The blueprint reflected competence in eliciting a history, information-giving and communication with colleagues, physical examination, and technical competence and safety in key procedural skills. The test was formally evaluated by both examiners and candidates and was subjected to statistical analysis and standard-setting procedures to develop familiarity with their use in this setting. 19 of the 20 stations were considered to meet the needs of this program appropriately, with minor amendments needed to address the issues identified. Another station was piloted but needed substantial revision. Reliability was acceptable, candidates thought it a fair test, examiner issues were highlighted and the cost was reasonable. Currently, implementation of the test is in abeyance due to developments at the federal level and the need for co-ordination between states.

Reference: 1. Postgraduate Medical Council of Victoria (2001) AMC Candidates in the Victorian Public Hospital System. http://www.dhs.vic.gov.au/pdpd

 

Development of a Competency-Based Student Assessment Tool to Facilitate the Acquisition of Lifelong Learning Skills
Keywords: Reflection, Lifelong Learning. Competency. Learning Styles.
Authors: Lowe, J. Shann S.
Institution: Northumbria University

Summary: It has been suggested that the highly academic nature of undergraduate programmes may equip students with high levels of knowledge and the ability to reflect theoretically (Westcott and Rugg 2001), but does this adequately prepare the student for the workplace in terms of practical skills? The presentation aims to discuss the authors' development of a competency-based assessment tool. This method of assessment concurs with current UK government initiatives that encompass continuing professional development and embrace a lifelong learning culture (Department for Education and Employment Green Paper 1998). It is envisaged that through the use of a competency-based approach, together with the self-identification of learning styles (Honey and Mumford), the student's learning experience will be enhanced. In meeting competencies, students are encouraged to use reflection to integrate theory and practice, thus enabling them to evidence their learning on practice placement and thereby begin lifelong learning.

References:

1. Department for Education and Employment (1998) Green Paper: The Learning Age, A Renaissance for a New Britain. Department for Education and Employment. [http://www.lifelonglearning.co.uk/greenpaper/index.htm] 13/11/2003.

2. Honey P, Mumford A. (1992) The Manual of Learning Styles (3rd Ed). Maidenhead.

3. Westcott, L., Rugg, S (2001) The Computation of Fieldwork Achievement in Occupational Therapy Regress: Measuring a Minefield. British Journal of Occupational Therapy. 64 (11) p541-548.

 

International medical graduates in Australia: Assessment for hospital practice (4: The structured behavioural interview)
Keywords: international medical graduates, interviews, assessments
Authors: Kanaris A2, Flynn E3, Sutton B2, McGrath B2, Jolly B1, Jordon C2, Nestel D1, Elliot S3, Graecen J4, Tiller J5,
Dancer A5, Findlay D6.
Institution: 1. Centre for Medical & Health Sciences Education, Monash University 2. Postgraduate Medical Council of Victoria 3. Faculty Education Unit, Faculty
of Medicine, Dentistry & Health Sciences, University of Melbourne 4. Rural Workforce Agency Victoria 5. Victorian Medical Postgraduate Foundation 6. General Practice Education Australia

Summary: This paper describes the development of one component of a three part safe practice assessment process for international medical graduates entering public hospitals. The assessment process aims to minimise risk to patients and determine the need for further training and/or supervision. The structured interview is one stage of the assessment process. The context of the interview within the broader framework of the assessment process will be outlined prior to presenting the details of this component. The assessment process was developed by a consortium of academics, clinicians and administrators from professional organisations in Victoria, Australia led by the Postgraduate Medical Council and was funded by the Victorian State government. The first stage of the project was completed in September 2003. The main purpose of the interview is to explore the essential criteria for safe practice. The behavioural interview is based on the principle that the best predictor of future behaviour is past behaviour/performance in similar circumstances. The interview is carefully designed to systematically assess the applicant's medical training and knowledge, clinical skills and experience, and communication skills. We will describe the development and piloting of the interview process, the training of interviewers such that the behavioural interview can be conducted in a range of locations and in a timely manner, and discuss certain key aspects of assessment – standard setting and blueprinting, validity and reliability.

 

International medical graduates in Australia: Assessment for hospital practice (3: A written test of safety and competence)
Keywords: written assessment, international medical graduates, EMQs, MCQs
Authors: Jolly B1, McGrath B2, Jordon C2, Kanaris A2, Nestel D1, Elliot S3, Flynn E3, Graecen J4, Findlay D5.
Institution: 1. Centre for Medical & Health Sciences Education, Monash University 2. Postgraduate Medical Council of Victoria
3. Faculty Education Unit, University of Melbourne 4. Rural Workforce Agency Victoria 5. General Practice Education Australia

Summary: This paper describes the development of one component of a three part process in the assessment for hospital practice – a written test on safety and competence – designed for international medical graduates. The context of the written test within the broader framework of the assessment will be outlined prior to presenting the details of this component. The assessment process has been funded by our State government and was developed by a consortium of academics, clinicians and administrators from professional organisations in Victoria, Australia led by the Postgraduate Medical Council. The first stage of the project was completed in September 2003. The written test is a requisite component of the assessment process and is designed to identify safe and competent doctors in relation to knowledge. The test uses multiple choice and extended matching questions and the standard is set at the end of the first postgraduate medical year. The test is delivered online and can be delivered in several sites. We will describe the development of this test including the piloting together with highlighting key aspects of assessments – validity, reliability, standard setting and blueprinting and security.

 

International medical graduates in Australia: Assessment for hospital practice (1: Process and challenges for successful implementation)
Keywords: international medical graduates, assessment, competence
Authors: Jolly B1, McGrath B2, Kanaris A2, Nestel D1, Jordon C2, Elliot S3, Flynn E3, Graecen J4, Tiller J5,
Dancer A5, Findlay D6, Flanagan B7, Paltridge D8.
Institution: 1. Centre for Medical & Health Sciences Education 2. Postgraduate Medical Council of Victoria 3. Faculty Education Unit, University of Melbourne
4. Rural Workforce Agency Victoria 5. Victorian Medical Postgraduate Foundation 6. General Practice Education Australia 7. Southern Health Simulation & Skills Centre 8. St Vincent's Hospital Human Simulator Centre

Summary: The assessment for hospital practice has been developed by a consortium of clinicians, educators, and administrators from professional organisations in Victoria, Australia. The State government funded project was completed in September 2003 and led by the Postgraduate Medical Council of Victoria who had previously reported the need for assessment of medical knowledge, clinical and communication skills before international medical graduates commence employment in Victoria (2001). No other test in Australia was adaptable as none was blueprinted at the end of the first postgraduate year. The consortium was contracted to develop an assessment process that:

· Ensures a basic level of safe practice by all international medical graduates prior to taking up employment

· Enables a recommendation as to whether a more formal communication assessment is required

· Is valid, reliable, fair, transparent, defensible and timely

The objectives of the assessment are to:

· Screen for high risk practitioners so as to minimize the risk to the public

· Identify conditions of registration related to training, supervision and area of practice for each candidate

· Determine the need for more rigorous individual communication assessment. The assessment consists of 3 main parts:

1. Written test of safety and competence

2. Structured interview

3. Practical test of safety and competence

This paper outlines the development of the assessment process highlighting specific issues that emerged during the process, the costs and the challenges for successful implementation.

Reference: 1. Postgraduate Medical Council of Victoria (2001) AMC Candidates in the Victorian Public Hospital System. http://www.dhs.vic.gov.au/pdpd

 

The pharmacognosy post graduate core curriculum revision project in Iran
Keywords: curriculum revision, pharmacognosy, post graduate
Authors: Asghari, G.
Institution: School of Pharmacy

Summary: The poster provides an overview of the pharmacognosy postgraduate core curriculum revision project. The aim of the project was to revise the postgraduate pharmacognosy core curriculum for the pharmacy postgraduate program, first established in 1990, so that it would better reflect the needs of universities and research centres. A further aim of the project was to address the educational needs of postgraduate students. To achieve these aims, a core curriculum revision meeting and workshop was organized and attended by representatives from most of the pharmacy schools and the pharmacognosy board committee. A rough draft of the core curriculum was developed, circulated and subjected to further scrutiny and modification. The updated core curriculum was introduced to the National Committee on Medical Education Planning and is now in the approval process.

 

Determining the factors affecting the educational achievement of students of Jahrom Medical University
Keywords: educational achievement
Authors: Sedighe Najafipour; Noriachtar Danesh; Fatemahe Najafipour; Azam Najafipour
Institution: medical university

Summary: Determining the factors affecting the educational achievement of students of Jahrom Medical University. Sedighe Najafipour, Noriachtar Danesh, Fatemahe Najafipour, Azam Najafipour. Several factors may influence students' educational achievement, such as sex, social awareness, field of study, social and psychological problems, and interest in their field of study. Determining the variables affecting the students' educational achievement was the goal of this study. This descriptive study was carried out in Jahrom during the 2001 academic year. A questionnaire consisting of questions about personal, social and educational factors was used to gather data. 200 students, in two groups of successful and unsuccessful students, participated in the survey. Results indicated that the educational success of female students was significantly greater than that of male students (p=0.005), and that there was a direct relationship between field of study and achievement in it: medical students were more successful than nursing students.

 

Creating a Core Curriculum in Pain Management
Keywords: curriculum development, pain management
Authors: Ortwein, H.
Institution: Charité Medical School

Summary: The Regular Track Curriculum at Charité Medical School is a lecture- and seminar-based curriculum; in addition to these teaching activities, bedside teaching is implemented. Students are assessed with multiple choice questionnaires and an Objective Structured Clinical Examination (OSCE). The new German requirements for licensure to practice medicine oblige medical schools in Germany to review their curricula. The planning process is highly regulated and requires the different disciplines to assess their students' knowledge and clinical skills. The new curriculum at Charité Medical School is the first of its kind in Germany to include a Core Curriculum in Pain Management. As an interdisciplinary field, Pain Management had not been mentioned in the law as part of a core curriculum. Therefore, at our institution all disciplines involved in the field of Pain Management (Anesthesiology, Neurology, Orthopedics, Physical Medicine, Psychosomatic Medicine) formed a successful task force to develop and implement an interdisciplinary course in Pain Management, integrated within the teaching of the different disciplines. Besides lectures, the teaching tools will be seminars, bedside teaching and standardized patient contact. Assessment will be carried out using MC testing and OSCEs. This poster will describe the interdisciplinary planning process, the curriculum content and the evaluation methods.

 

Sir James Paget: Founding father of research in medical education
Keywords: Sir James Paget; medical education; research; history
Authors: McManus, Chris.
Institution: University College London

Summary: The name of Sir James Paget (1814-1899) is known to all doctors and many patients for his description of what is now known as Paget's Disease of Bone. Less well known is that he was an innovator in medical education, and that he carried out the first large-scale outcome study of medical training. In his 1869 paper, "What becomes of medical students?" (Saint Bartholomew's Hospital Reports, 5: 238-242), Paget followed up over 1000 students whom he had taught at St. Bartholomew's Hospital between 1839 and 1859, classifying their professional careers into six categories ('distinguished success', 'considerable success', 'fair success', 'very limited success', 'failed entirely' and 'left the profession'). He also speculated on the psychological reasons for success and failure. Paget's analysis was based on the notes he made in the 'entry-book' he kept, in which students signed in for his courses at St Bartholomew's Hospital. The entry-book is preserved in the library of the Royal College of Surgeons of England, and I will describe analyses of the detailed entries, as well as further follow-ups of some of the students described by Paget. In this paper I will argue that Paget's work has been unduly neglected in the history of medical education, that it was well ahead of its time, that it was many decades before there was any equivalent study, and that his paper marks the onset of a modern, statistical analysis of the effects of medical training, assessing the eventual professional outcome of students and based on evidence.

 

Model for outcome-based evaluation of instructional effectiveness with different cohorts
Keywords: program evaluation, outcome-based evaluation, effectiveness
Authors: Pachev, G., Shah,A., Lara-Guerra, H., Koval, V., & Quayumi, K.
Institution: University of British Columbia

Summary: Objective: The poster presents a model for evaluating effectiveness based on outcome data from different student cohorts.

Rationale: Many educational innovations are developed with the ultimate goal of improving the outcomes of the instructional process. Once developed, however, they are often implemented in the education of new student cohorts without a preliminary evaluation of the innovation's effectiveness. This limits evaluation to satisfaction measures and indirect evidence of the advantages of the new instructional method. The proposed model specifies a protocol for the evaluation process with several steps. At each step, several options are considered pertaining to: control for subject differences, control for context differences, elimination of alternative explanations, and elimination of other threats to internal validity.

Method: The choice among options at each step is illustrated by applying the model to the outcome-based evaluation of effectiveness of a computer-assisted-instruction module for abdominal examination training.

Results: The results are discussed in terms of the informational value for evaluation decisions of the alternative paths at each step of the model.

 

Orientation to assessments: A transition OSCE for first year medical students
Keywords: assessments, orientation, transition, OSCE
Authors: Halley E, Nestel D
Institution: Centre for Medical & Health Sciences Education, Monash University

Summary: At Monash University, all first year medical students attend a weekend residential transition camp that introduces students to the curriculum, including teaching, learning and assessment. Given that assessments are a source of anxiety, we wanted to provide a way of engaging students in the process of assessment before they have to deal with content. We developed a session emphasising the different domains (knowledge, attitudes and skills) targeted in assessments. The session format included a didactic presentation, objective structured clinical examinations (OSCEs) and written assessments. The content of these assessments is unrelated to medicine but aims to help students appreciate the rationale for their use. The session finishes with a reflection and discussion of key elements of assessment in medical education. In developing the session, one challenge was to identify activities that new students from a variety of backgrounds could complete without feeling stressed or uncomfortable, while at the same time reflecting the kinds of stations they would encounter in an OSCE. We created a 5-station transition OSCE. At the end of the session, students reflected on their experiences and discussed the assessment processes, their expectations, and how this experience might influence their preparation for future assessments. Students (n=180) used a 5-point scale to rate the helpfulness of the session. The mean score was 4.2 (SD=0.9, range 3 – 5). Free response comments indicated that the session was useful in introducing new assessment methods and that students were able to reflect on the reasons for these assessments and how they might prepare for them.

 

Successful use of senior medical students as examiners in an objective structured clinical examination
Keywords: OSCE, assessment, medical student
Authors: Amaral, F.
Institution: UNAERP-USP

Summary: Successful use of senior medical students as examiners in an objective structured clinical examination. Fernando TV Amaral 1, 2 and Luiz EA Troncon 2. 1. University of Ribeirão Preto Medical School (UNAERP); 2. Faculty of Medicine of Ribeirão Preto, University of São Paulo (FMRP-USP), Ribeirão Preto, State of São Paulo, Brazil. Participation of medical students in teaching and assessment activities has been increasingly encouraged. Nevertheless, the use of students as examiners in objective tests of clinical competence has not been extensively documented. We evaluated whether final year medical students could function as reliable examiners in an Objective Structured Clinical Examination (OSCE) of junior students. Six sixth-year medical students, selected on the basis of their interest in medical education, acted as examiners in stations for the assessment of history-taking, physical examination and communication skills in a 6-station OSCE for 59 third-year medical students. Each student examiner was paired with an experienced staff member, and both were blinded to each other's marking on checklists covering 67 clinical tasks. Analysis of the paired results showed no significant differences between staff members and student examiners in all but one station, in which the student examiners' marks were significantly higher. In one of the three physical examination stations a significant difference in marks was detected. We conclude that senior medical students can apparently be used as reliable examiners in an OSCE of basic clinical skills for junior students. Although some differences in marks may appear, as detected here, this strategy of using students as examiners may help to increase the feasibility and reduce the costs of objective examinations of clinical skills.

 

Maintenance of clinical skills by medical students. A cohort study
Keywords: OSCE, assessment, medical student
Authors: Amaral, F.
Institution: UNAERP-USP

Summary: Maintenance of clinical skills by medical students. A cohort study. Fernando TV Amaral 1, 2 and Luiz EA Troncon 2 1. University of Ribeirão Preto Medical School (UNAERP); 2. Faculty of Medicine of Ribeirão Preto, University of São Paulo (FMRP-USP), Ribeirão Preto, State of São Paulo, Brazil

Although early clinical training has been increasingly encouraged in undergraduate medical education, little is known about students' capacity to maintain clinical skills. We studied 2 cohorts (G1 and G2) of fourth year medical students, who underwent an objective, structured clinical examination (OSCE) in basic skills in Cardiology. Six months later they were assessed, on a voluntary basis, using the same OSCE. Stations lasted 7 minutes each, during which staff members used a structured checklist to score the results. In G1 (n=21), performance was very good in physical examination skills, good in EKG and chest X-ray analysis, and average in communication skills. Performance in history-taking skills was weak, but this showed a significant improvement in the 2nd assessment. In G2 (n=50), performance was good in physical examination skills and EKG analysis, average in communication skills, and weak in history-taking and X-ray analysis skills. No significant improvement was found in the 2nd assessment. We conclude that medical students tend to maintain performance in clinical skills that are more fully mastered, whereas improvement in areas of weakness is inconstant. Periodic assessment of clinical skills is useful in order to improve standards in students' clinical training.

 

The Use of On-Line Formative Assessments to Enrich Learning in an Integrated Medical School Curriculum
Keywords: assessment, medical education, formative assessment
Authors: Krasne, S., Relan, A., Fung, C-C., and Drake, T.A.
Institution: David Geffen School of Medicine, University of California, Los Angeles

Summary: This study examined the effects of weekly formative assessments on achievement, learning and perceptions in an integrated medical school curriculum. Of the 146 entering medical students, 110 volunteered to participate in the study. Seven required and one optional formative assessments, corresponding to weekly curricular "themes", were delivered on-line over the eight-week curricular block. Each assessment consisted of two parts: a timed, closed-book component and an un-timed, open-book/resource component. The goals of the formative assessments were to provide feedback that would 1) allow students to monitor the focus and depth of their learning; 2) allow students to become familiar with the style of the summative assessment implemented at the end of the block; 3) identify struggling students early in order to recommend appropriate remediation; and 4) identify areas of weakness in the curriculum that needed intervention and improvement. A Formative Assessments Perceptions Survey (FAPS) was administered following the curricular block. Student performance showed significant improvement over the seven required formative assessments based upon a repeated-measures ANOVA performed on mean scores (F = 17728.829, p<.01). Mean scores also increased between the formative assessments and the summative assessment (t = 7.862, p<.001). Students overwhelmingly favored having the formative assessments. Based upon analysis of student performance and the FAPS, formative assessments appear to contribute to overall achievement, promote a proactive approach to learning, and offer the psychological and cognitive scaffolding needed for improved learning and performance.
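The formative-to-summative comparison reported above is a paired (within-student) comparison; a minimal sketch of such a test using scipy.stats.ttest_rel is shown below, with hypothetical per-student mean scores rather than the study's data.

import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-student mean formative and summative scores (illustrative only)
rng = np.random.default_rng(2)
formative = rng.normal(70, 8, 110)
summative = formative + rng.normal(3, 4, 110)   # assumes a modest average gain

# Paired t-test across the same 110 students
t_stat, p_value = ttest_rel(summative, formative)
print(f"t = {t_stat:.3f}, p = {p_value:.4g}")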

 

M.D.
Keywords: Teaching Scholarship, Faculty Recognition
Authors: Wolpaw, D., Wolpaw, T.
Institution: Case School of Medicine (Case Western Reserve University School of Medicine)

Summary: Traditional approaches to recognizing contributions to medical education are largely dependent on learners and subject to popularity and exposure bias, impacting only a small percentage of our teachers. With the goal of a process that would be inclusive, broadly applicable, transparent, and academically rigorous, we set out to address the challenge of faculty recognition for education in three steps: 1) Track faculty effort in medical education through an electronic summary, 2) Ask faculty to describe a recent educational effort in a 1-2 page "Best Contribution" narrative, 3) Subject these narratives to an academically rigorous peer review process that serves as the basis for recognition awards. This program is designed to evaluate scholarship and quality in the various products of educational effort, rather than take on the complex and ultimately subjective challenge of fairly evaluating the quality of the teachers themselves. It is expected that the impact of this program will be seen in four ways: 1) Enhancing the profile of education and educators 2) Opening up the classroom for better communication on new and/or successful ideas, 3) Creating a straightforward template for teaching recognition that can be easily translated across institutions, and 4) Establishing a broad-based peer review network for educational ideas and products. Program evaluation includes tracking submissions, peer-review scores, and subsequent publications, as well as surveying attitudes of applicants, non-applicants, and members of the promotions and tenure committee to assess impact and changes in institutional culture.

 

Starting Work - Ready or not? Views of commencing medical interns on the skills developed during their undergraduate program
Keywords: curriculum evaluation, undergraduate medicine, graduate skills
Authors: Lindley, J.; Liddell, M.
Institution: Monash University

Summary: Decisions about the quality of medical education rely, in part, upon the performance of new graduates in their roles as beginning doctors. The success of the course in preparing medical graduates depends upon graduates being equipped with the necessary knowledge, skills, attitudes and professional behaviours. As the practice of medicine requires the application of knowledge and skills in a clinical setting embedded within a social context, graduates must be capable managers of health care across a complex range of situations. To evaluate graduate outcomes, the Faculty of Medicine at Monash has collected data from two consecutive cohorts of graduates during their first year as medical practitioners in the hospital system. The second cohort had undertaken a final year program that was significantly revised compared with that undertaken by the first cohort. The project gathered graduates' views on the success of their undergraduate course in preparing them for the demands of the medical workplace. Responses were sought on a range of vocational skills comprising clinical tasks, procedural techniques and professional relationships. Data from the surveys were analysed, and the results for clinical tasks, practical skills and professional relationships revealed some differences between the cohorts, with students from the second cohort indicating that they perceived themselves to be slightly better prepared than their counterparts in the previous cohort. The data analysis also allowed identification of specific areas for curriculum review.

 

Influence of the APLS and PALS courses on self-efficacy in paediatric resuscitation
Keywords: APLS, PALS, self-efficacy, paediatric, resuscitation
Authors: Turner, N.M., Paediatric Anaesthesiologist; Dierselhuis, M.P., Final-year Medical Student; Draaisma, J.Th.M., Paediatrician; ten Cate, Th.J., Professor in Medical Education
Institution: Wilhelmina Children's Hospital and Faculty of Medicine, University Medical Centre, Utrecht, and St Radboud Medical Centre, Nijmegen, The Netherlands

Summary: Introduction: Most life support courses recognise that performance during resuscitation depends partly on attitudinal factors (1). The current study was designed to assess the effect of following either the Advanced Paediatric Life Support (APLS) or the Pediatric Advanced Life Support (PALS) course on learners' self-efficacy in respect of six psychomotor skills. Global self-efficacy in paediatric resuscitation was also measured.

Methods: All candidates attending the courses were sent an anonymous questionnaire before the course and three and six months later. They were asked: 1) to rate their self-confidence in respect of the six skills and globally using a 100 mm visual analogue scale; 2) to estimate the frequency of performance of the skills; 3) to nominate two direct colleagues with a similar level of experience who did not intend to follow either of the courses.

Results: Preliminary results suggest that attending the courses does lead to increased self-efficacy both globally and in respect of defibrillation, insertion of an intraosseous device and umbilical vein catheterisation. Prior to the course, candidates appear to have less self-confidence about intubation and defibrillation than their colleagues who choose not to follow the course. See graph

Discussion: Although this study makes use of a new method of measuring self-efficacy, and despite the fact that the relationship between self-efficacy and performance is variable (2), we cautiously conclude that the APLS and PALS courses seem to have a positive affective effect on the candidates, which might be associated with improved performance in paediatric resuscitation.

References

1) Carley S, Driscoll P, Trauma education, Resuscitation 48 (2001) 47-56.

2) Morgan PJ, Cleave-Hogg D, Comparison between medical students' experience, confidence and competence. Medical Education 36 (2002) 534-539.

 

 

Learning Portfolios in Undergraduate Medicine
Keywords: learning portfolios, developmental tools, educational and training needs
Authors: Brigden, D.
Institution: University of Liverpool / NHSE (Mersey Deanery)

Summary: This poster presentation will aim to make the case for the use of learning portfolios as a developmental tool for medical undergraduates, helping them to identify their educational and developmental needs as well as recording their successes. It will offer advice on how to construct a portfolio, the importance of reflection in this process, and its role in appraisal and assessment. (Jan Islei and Claire Lane are 3rd-year medical students at the University of Liverpool.)

 

Towards the promotion of quality in Medical Education at the Faculty of Medicine of the University of Porto (FMUP): Connecting the Evaluation Process with the Proposal of an Innovative Curriculum
Keywords: Evaluation, Curriculum
Authors: Tavares, M.A.F., Bastos, A., Sousa-Pinto, A.
Institution: Faculty of Medicine, University of Porto, and School of Higher Education, Polytechnic Institute of Viana do Castelo

Summary: From 1998, the medical course of the FMUP was evaluated under several institutional initiatives, all within the scope of quality programs in higher education. As part of these programs, the CNAVES (National Council for Higher Education Evaluation) provided the guidelines for a new evaluation process of the medical course during the academic year 2002-2003. The answer to this request triggered a dynamic process in the FMUP involving the whole institution, carried out as a developmental evaluation. The results obtained in resources, administration, education and research made it possible to draw up a strategic developmental view of the FMUP. Evaluation of the curriculum identified a set of strengths and weaknesses that reinforced the urgent need to reform the curriculum content and the integration of subjects, merging basic sciences with a clinical view from the beginning of the medical course, enhancing the clinical component and introducing optional modules. Within the development of a quality program, in the same academic year, the Curriculum Committee of the FMUP started to design the new curriculum. The basic structure of the emerging proposal is a core curriculum with optional study modules, providing vertical integration within a system-organization model and horizontal integration within a theme/subject organization. This model is intended to overcome the weaknesses demonstrated in the different evaluation processes of the course, while supporting and enhancing the strengths of the Institution. The present work will describe the process of developmental evaluation established at the FMUP and the central guidelines that will provide the foundation of the new curriculum (Supported by FMUP).

 

Students' perceptions of learner-centered, small group seminars on the medical interview
Keywords: learner-centered method, medical interview, undergraduate education, video-tape review
Authors: Saiki, T.; Mukohara, K.; Abe, K.; Ban, N.
Institution: Nagoya University Hospital

Summary: Background: Experts in medical education recommend learner-centered instructional methods. We utilized such an experiential, interactive method for a two-day, small group seminar on medical interview and communication skills for students at the Nagoya University School of Medicine. It was part of a 1-week clerkship rotation at the Department of General Medicine.

Purpose: To describe the perceptions of medical students of the learner-centered, interactive, small group seminar for medical interview and communication skills.

Methods: A 10-item questionnaire was administered to a total of 101 students who participated in the seminar throughout the academic year April-2003 to March-2004. The questionnaire items were related to the process of a learner-centered educational method and included a global assessment of satisfaction with the seminar. Each item was rated on a 4-point scale labeled as unsatisfied, somewhat unsatisfied, somewhat satisfied, and satisfied. The proportions of students who were satisfied were calculated for each item.

Results: Seventy-six percent of students were satisfied with the seminar overall. Among the other 9 items, engaging all students in discussion was rated the highest (80% satisfied). The items concerning structuring the seminar in logical sequence and managing time well were rated the lowest (39% and 42% satisfied, respectively).

Conclusion: The learner-centered seminar on medical interviewing was well received by students, especially for its interactive methods. Items that reflect more teacher-centeredness such as structuring the seminar in logical sequence and managing time well received lower satisfaction ratings.

 

Formal education in the early years of postgraduate training: has the pendulum swung too far?
Keywords: formal, informal, experiential, work-based, supervision,
Authors: Agius, S J.; Willis, S; Mcardle, P; O'Neill, P
Institution: University of Manchester

Summary: Formal education in the early years of postgraduate training: has the pendulum swung too far?

Background: The relationship between hospital consultants and doctors in training is set to experience yet further transformation with a Government-instigated modernisation process in postgraduate medical education.

Method: The University of Manchester has conducted a qualitative study of the culture of medical education in the SHO grade, based on interviews with 60 clinicians and educational leaders. These were recorded, transcribed and subjected to content analysis. For this study, data was coded to determine perceptions of formal and informal education.

Results: Within hospital-based communities of practice in medical education, the centrality of the relationship between consultant and doctor in training remains undiminished. The educational experience of a doctor in training depends largely upon the consultant(s) to which (s)he is assigned. There is a common perception that too much emphasis is being placed on formal education, to the detriment of work-based experiential learning.

Discussion: There is a perception that the early years of postgraduate medical training have altered as a result of external variables (reduced hours, shift systems). There is a consequent sense of loss at the reduction in contact between trainer and trainee, compounded by a belief that education is increasingly dislocated from the work-place through the use of formal classroom-based techniques. If the Government's new model of training is to work, then education should be located firmly in the work-place, within a formalised structure that makes learning explicit and foregrounds the importance of supervision and feedback. This will assist in retaining consultant commitment to the educative role, reducing the sense of conflict between service and training, whilst providing an effective means for the doctor in training to harness the requisite knowledge, skills and attitudes as an itinerant learner within a coherent structure.

 

Geriatrics OSCE: the first 4 editions in Catalonia
Keywords: Geriatrics, clinical competence, assessment, OSCE
Authors: Arnau J*, Gràcia L*, Altimir S**, Miralles R**, Vázquez O**, Cervera AM**, Blay C*,Martínez-Carretero JM*
Institution: Institute of Health Studies * Catalan-Balearic Society of Geriatrics **

Summary: The Institute of Health Studies has been working for the last eleven years to introduce the OSCE as an assessment tool to evaluate professional competences in different medical areas. In recent years, the Institute and the Catalan-Balearic Society of Geriatrics have worked in a new medical area:

GERIATRICS. Two pilot OSCE editions for geriatricians were carried out in order to strengthen the design of the assessment tool. Thereafter, 2 more editions were administered in the first semester of 2004. Through these 4 editions, 67 physicians practising as geriatricians have been evaluated. The mean global scores of the participants across all editions were not high, around 50%. Steps must be taken to evaluate both the OSCE design and the foreseeable professional weaknesses of this medical group. As a provisional conclusion, we can state that the first four OSCEs in Geriatrics carried out in Catalonia suggest that the OSCE methodology is a valid, feasible and satisfactorily accepted instrument to assess the professional competences of geriatricians working in our country.

 

Tracking the Professional Socialisation of Beginning Undergraduate Midwifery Students
Keywords: professional socialization, reflective practice, e-portfolio
Authors: Lawson, M.; McKenna, L.; McIntyre, M.
Institution: Monash University

Summary: Major changes have been introduced to the educational preparation of midwives in Australia over the past three years. This study explores the impact of these educational changes on the professional socialization of midwifery students. Traditionally, midwifery was offered as a postgraduate award for Registered Nurses. Whilst many countries have had direct-entry midwifery programs for many years (therefore not requiring midwifery students to be qualified nurses), Australia has only recently introduced this training route. In the State of Victoria, the first students entered Bachelor of Midwifery programs in 2002, with Monash University commencing its first intake of 25 students in 2003. The implication of this change is that midwifery students do not bring with them well-developed foundational skills from previous nursing experience. For health care agencies hosting students on clinical placements, adjustments to expectations of students may be required. Furthermore, in most cases students have had no previous socialisation into health care settings. This study was designed to explore the perceptions and experiences of students and their socialisation into midwifery care. The study utilises an electronic portfolio (e-portfolio) that allows students to reflect individually, as well as through guided questions, on their experiences throughout their course. At set time points in their programme, students are set a number of structured tasks and are asked to identify critical incidents to map their socialization and to model reflective practice. The web-based format of the e-portfolio has been provided to encourage task completion at the time and place of the events.

 

Self audit as an educational tool: tutors first
Keywords: Self audit, tutors, family and community medicine
Authors: Ezquerra M, Avellana E, Calvet S, Morera C, Tamayo C, Vila Mª A.
Institution: Teaching Units of Family Medicine Residency Programme of Catalonia / Institute of Health Studies

Summary: Objective: Development of a self audit (SA) has been included in the clinical profile of the accreditation/reaccreditation criteria for Catalan tutors of family medicine residents since July 2001. Giving feedback on their own SA is the main purpose at this first stage. The specific initial objective of this project is simply to determine SA quality.

Setting: Tutors of the Teaching Units of the Family Medicine residency programme

Methodology: All SAs submitted between July 2001 and July 2004 have been assessed by a group of 6 expert peer reviewers, using adapted criteria of the West of Scotland Committee in General Practice.

Results: Up to now, 184 SAs have been evaluated. The most frequent topics presented were diabetes (23.5%) and hypertension (11.5%). The statement question selected was adequate in 84.6% of cases. Criteria were well structured in 29.6% but could be improved in 49.1% of them; the mean number of criteria was 5. Sampling was adequate in 89.9%, with 20 patients as the mean. The chosen methodology was adequate in 56.6%, interpretation of results in 66.1%, improvement proposals in 46.6%, and bibliography in 66.7%. Global results show that 39.2% are adequate and 48.1% improvable.

Conclusions: SA methodology is in its first stage; tutors should make further efforts to improve it in order to implement it amongst residents as an educational tool. Criteria are not adequately constructed in a high percentage of cases, and improvement proposals must be more relevant. The most frequent topics are those that are most prevalent and most audited for other reasons.

 

Educational progress of daily and evening students in medical records
Keywords: educational progress, daily students, evening students, medical records
Authors: Arabzadeh, A. Khudayar, F.
Institution: Ahwaz Medical University

Summary: Comparing educational progress has a considerable place in educational programmes, and researchers have tried to determine the quality of educational progress in two groups of learners with an equal teaching programme and the same syllabus. For example, a significant difference has been noticed between the educational progress of single-parent students and other students in normal conditions, in that the average total score of the latter was higher than that of the former. In another study, no significant difference was shown between daily and evening students (in Iran, "daily" and "evening" are two options in academic learning) with regard to anxiety criteria, but it was noticed that over time, by the middle of the term, anxiety increased in evening students. Students registered in evening nursing courses, despite having more working hours and consequently more stress, could obtain higher scores in their courses. Overall, this has been an attempt to compare the educational progress of daily and evening students in medical records. Both groups had the same syllabus and age range; in some basic courses there was a significant difference, while in some specific courses no significant difference was noticed. According to the reports, this is due to fatigue in the afternoon or evening hours. The total number of evening students was smaller than that of daily students, and the evening students had little interest in learning in the last hours of the day. They have the same syllabus and tutors but have to pay fees, while the daily students are funded by the government.

 

A Formal Remediation Programme for Medical Students Failing the Clinical Assessment at Their Graduating Examination
Keywords: Formal remediation, pastoral support
Authors: Feather A, Hayes K.
Institution: St George's Hospital Medical School, Cranmer Terrace, SW17 0RE

Summary: Q.What do you do with students who fail finals?

A. Let them take it again... and again.

There is little in the medical education literature on formal remediation programmes supporting academically underachieving students. We describe an intensive ten-session programme for students failing their Final MBBS clinical examination. Prior to this programme's inception there was little in the way of formal remediation offered to these students, and they often felt isolated and disillusioned. The programme takes a surface approach and does not seek to identify learning pathologies or styles. Instead it concentrates on offering a supportive role for perceived and identified areas of weakness, as well as offering pastoral care and support. Our programme has had several predictable and some less predictable effects.

(1) The re-failure rate of students has been reduced.

(2) Students report increased motivation, self-worth and renewed enthusiasm for the course and their careers.

(3) Working as a small group rather than individually has provided peer support and reduced isolation.

(4) Involvement of numerous clinical and non-clinical staff has led to a greater recognition of the importance of such programmes throughout the curriculum for both students and staff. We hope that through the support of curriculum planners we may incorporate similar programmes around all major summative assessments and that all underachieving students get the extra support and care that they warrant. Engagement of failing students and increased staff awareness and interest in the reasons for academic failure may also be helpful for its future prevention.

 

Which factors are associated with the evaluation of a post-graduate course in public health?
Keywords: evaluation, public health, post-graduate course
Authors: Revuelta Muñoz, E., Farreny Blasi, M, Godoy Garcia, P
Institution: Institut Català de la Salut

Summary: Introduction. Evaluating postgraduate courses is essential for increasing their quality and adapting them to the needs of students. The objective of this study was to analyse whether the student-related characteristics have an influence on their evaluation of post-graduate courses.

Methods. The population of the study was 70 students from the "Diplomado en Sanidad", a post-graduate course in Public Health held in Lleida (Spain) from 2001 to 2003. The course was organised in 8 modules: "Introduction to Public Health", "Statistics", "Transmitted Diseases", "Protocols in Chronic Diseases", "Health Protection" (HP), "Epidemiology", "EpiInfo", and "Research Methodology" (RM). The first 4 modules were theoretical and the other 4 had a practical approach. Independent study variables were student profession, gender and age. The dependent variable was the global evaluation of each module. The information was obtained from a self-administered questionnaire. The question related to the dependent variable was "Does this course generally meet your needs?", scored from 1 ("total disagreement") to 5 ("total agreement"). Each variable was characterized by a mean and its standard error. The relationship between the dependent and independent variables was studied using an ANOVA test with a p value < 0.05.
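As a rough sketch of the analysis described (per-module 1-5 evaluations compared across an independent variable such as profession with a one-way ANOVA); all data below are invented:

```python
from scipy import stats

# Hypothetical 1-5 global evaluations of one module, grouped by profession.
nurses     = [4, 5, 4, 3, 4, 5]
physicians = [3, 4, 3, 3, 2, 4]
others     = [4, 4, 5, 4, 3, 4]

# One-way ANOVA: does the mean evaluation differ between professions?
f_stat, p_value = stats.f_oneway(nurses, physicians, others)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # significant if p < 0.05
```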

Results. The students' evaluation of the modules ranged between 3.2 for Statistics and 4.2 for RM, with significant differences (p<0.001). Epidemiology, EpiInfo and HP were also rated significantly highly. We did not detect any significant differences by age or gender.

Conclusions. Modules with a more practical approach receive the best evaluations and the greatest acceptance, independent of student profile. We should therefore adopt a more practical approach in our lectures.

 

Structured Communication Adolescent Guide (SCAG): Extension of Reliability and Validity to Residents and Physicians
Keywords: communication, adolescent, simulated patient, focus group
Authors: Blake, K.
Institution: IWK Health Centre

Summary: Background: The Structured Communication Adolescent Guide (SCAG) was developed to facilitate standardized patient (SP) feedback to medical students on their interviewing abilities with adolescents.

Purpose: To explore reliability and validity of the SCAG with physicians and residents.

Method 1: Two adolescents (age 15) were trained as SPs and participated in eighteen videotaped interviews conducted by physicians and residents. The adolescents used the SCAG to score the interviews immediately and re-scored a videotape of the same interviews one month later. Another adolescent and a gold-standard rater also scored the same videotaped interviews independently. Method 2: A focus group was conducted with 20 adolescents to discuss the vocabulary and content of the SCAG.

Results: The SP adolescent scoring the SCAG after her live interview produced the highest scores. No significant differences were evident amongst the SCAG scores from the videotaped interviews. The adolescent focus group resulted in vocabulary and content changes to the SCAG.

Reference: A Structured Communication Adolescent Guide (SCAG): Assessment of Reliability and Validity. Medical Education (submitted 2004.)

 

Catalan OSCE in paediatrics, 2002
Keywords: OSCE, paediatrics, assessment
Authors: Descarrega-Queralt, R.; Ros, E.; Rivera, P.; Monzón, MC.; Van Esso, D.; Molina, V.; Rodrigo, C.; Pintos, G.; Moraga, F.; Edo, A.; Luaces, C.; Verdaguer, J.; Julià, X.
Institution: Institut d'Estudis de la Salut. Societat Catalana de Pediatria

Summary: In 2002 the Institute of Health Studies and the Catalan Society of Paediatrics jointly administered 2 editions of the Paediatrics OSCE. A 28-station OSCE (15 cases) was designed for both open tests (February and October 2002). A total of 34 paediatricians (19+15) took the test. Standardized patients, manikins, pictorials, written cases and short-answer open-ended questions were combined to assess the candidates.

Main results:

                               February 2002        October 2002
                               Mean      SD         Mean      SD
TOTAL                          64.96     6.5        65.83     6.8
History taking                 62.6      10.1       69.7      8.1
Physical examination           63.3      5.1        61.9      8.6
Doctor-patient communication   73.8      6.0        76.2      10.5
Technical skills               65.8      24.4       53.4      24.6
Management procedures          62.9      9.1        63.6      8.3
Preventive care                61.0      10.8       64.0      12.6
Inter-professional relations   63.7      17.5       69.6      12.1

Reliability, estimated with Cronbach's alpha, was 0.72 for the first test administration and 0.74 for the second.
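For reference, Cronbach's alpha for an OSCE can be computed from a candidates-by-stations score matrix; the matrix below is invented and the formula is the standard one:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows = candidates, columns = OSCE stations (or items)."""
    k = scores.shape[1]                          # number of stations
    item_vars = scores.var(axis=0, ddof=1)       # variance of each station
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical station scores (percentages) for 5 candidates on 4 stations.
demo = np.array([[60, 70, 65, 72],
                 [55, 60, 58, 61],
                 [80, 85, 78, 90],
                 [65, 62, 70, 68],
                 [50, 55, 52, 58]], dtype=float)
print(f"Cronbach's alpha = {cronbach_alpha(demo):.2f}")
```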

 

Selecting Interviewers for OB/GYN Residency Applicants: Getting the Most Bang for the Lost Buck
Keywords: residency improvement, interview
Authors: Pablo C. Argeles, MD, MPH; Patrice M Weiss, MD, Craig A. Koller, BS, Kerry Meagher, Thomas Wasser, PhD, L. Wayne Hess, MD
Institution: Lehigh Valley Hospital, Department of Obstetrics and Gynecology, Allentown Pennsylvania, USA, P.O. Box 7017, 18105-7017

Summary: Background: Approximately 250 potential revenue producing hours are devoted to the interview process each year at the Department of OB/GYN at Lehigh Valley Hospital. With increasing financial pressures, it has become important to analyze the effectiveness of the interviewing physicians to maximize match outcomes while minimizing expenses.

Objective: To evaluate whether there is a specific group(s) of interviewers who can successfully determine which residency candidates will match and which will not.

Methods: Each candidate undergoes four to five interviews and a post interview score is subjectively assigned by each interviewer. Interviewers include the department chairperson, the program director, core teaching faculty, and third year residents. The score is based on the impression of the candidate compared with past experience, attitude, maturity assessment, program compatibility, communication skills, and problem solving skills. Six years of applicants' post interview scores were evaluated and the scores of each interviewer group were analyzed to determine who best predicted match outcomes.

Results: The chairperson's mean post-interview scores for matching and non-matching interviewees were 93.1 and 78.2, respectively (p=0.037). No other interviewer group reached statistical significance: faculty (p=.931), program director (p=.291), third year residents (p=.167).
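The abstract does not name the statistical test behind p=0.037; assuming an independent-samples t-test on the chairperson's scores, a sketch with invented scores would be:

```python
from scipy import stats

# Hypothetical post-interview scores assigned by the chairperson.
matched     = [95, 92, 90, 96, 93, 91]   # candidates who later matched
not_matched = [80, 75, 78, 82, 76]       # candidates who did not match

t_stat, p_value = stats.ttest_ind(matched, not_matched)
print(f"mean matched = {sum(matched)/len(matched):.1f}, "
      f"mean not matched = {sum(not_matched)/len(not_matched):.1f}, "
      f"p = {p_value:.3f}")
```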

Conclusion: The department chairperson's evaluations are statistically significant in determining match outcomes. The interview process could be restructured to maximize interviewee contact with the department chairperson and reduce the number of interviewers, particularly core teaching faculty, thus leading to fewer lost revenue-producing hours.

 

The Austrian GP Licensing Examination - An Analysis of Metadata and a Discussion of Possible Consequences
Keywords: General practitioner, licensing examination, modified essay questions, assessment methods
Authors: Thomas Link, Michael Schmidts, Martin Lischka
Institution: Institute for Medical Education, Medical University of Vienna

Summary: The Austrian GP Licensing Examination consists of a set of paper cases and modified essay questions. There are 3 examinations a year. In our presentation we will give a general overview of this examination and an analysis of metadata, which is available for the years 2001-2003. In this period, 2061 candidates took the exam. Case vignettes and questions can be grouped according to medical competencies, the problem's "chronological dynamics", and the affected area. Questions concerning history taking (difficulty p=0.51) are consistently the most difficult, whereas questions about urgent procedures (p=0.66) are the easiest. Cases that describe emergency situations (p=0.67) are easier than cases with acute (p=0.59) or chronic (p=0.60) diseases. Ophthalmologic (p=0.46) or psychiatric (p=0.54) cases are typically more difficult than cases about accidents (p=0.69) or gynecological ailments (p=0.69). These differences could be explained by: (1) inherently varying difficulty levels of different medical areas or competencies; (2) a bias of the assessment method; (3) their different importance in general practitioners' training. The candidates' z-standardized scores differ significantly according to their age (r=-0.42) and to a small degree according to their gender (eta=0.18, male=-0.17, female=+0.17). Regional differences in the candidates' scores between the capital (-0.32) and the rest of Austria (0.09) can to some extent be explained by the candidates' age (Vienna: 35.07, other: 32.33). The explanation for the peculiarity of the Viennese situation could be twofold: (1) the training situation is worse; (2) becoming a general practitioner has lower priority.
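For orientation, the difficulty values p quoted are, in classical test theory, simply the proportion of candidates answering an item correctly, and z-standardisation of candidate scores is a plain mean/SD transform; a small sketch with invented data:

```python
import numpy as np

# Hypothetical 0/1 correctness of 8 candidates on one question.
answers = np.array([1, 0, 1, 1, 0, 1, 1, 0])
difficulty_p = answers.mean()            # proportion correct (higher = easier)
print(f"difficulty p = {difficulty_p:.2f}")

# z-standardised candidate total scores (mean 0, SD 1).
totals = np.array([52.0, 61.0, 47.0, 70.0, 55.0, 66.0])
z = (totals - totals.mean()) / totals.std(ddof=1)
print(np.round(z, 2))
```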

 

Development of a Multiple Choice Instrument to Assess Characteristics of Candidates for Admission to an Undergraduate Pharmacy Degree Program
Keywords: professional programs, admissions test, non-academic traits, candidate selection
Authors: Lavack, L. and Braha, R.
Institution: Leslie Dan Faculty of Pharmacy, University of Toronto

Summary: What to assess: The content domain was determined through compilation of data from key pharmacy professional and academic documents. An extensive list of characteristics was collapsed into nine positive and nine negative broad non-academic characteristic domains. The results of a validation survey of key stakeholders re-affirmed the relevance and importance of the characteristics.

Instrument development: A pool of items/questions was generated for each characteristic. Validity scales were developed for use in a multiple-choice format questionnaire. Sequential field tests investigated the psychometric performance and qualities of the items and the instrument. Refinements continued until acceptable psychometric performance standards were met. The instrument achieved or exceeded all relevant psychometric standards in subsequent field tests and was used in the spring 2003 admissions cycle.

Validity and standard setting: Extensive analyses were completed to ensure the instrument and cut-scores were reliable and valid for the purpose of selecting applicants for further consideration. After confirming internal validity of the instrument, a combination cut-score was determined: a minimum threshold for the overall score, as well as a minimum level on each of the relevant positive and negative characteristics, identified a subset with the most positive and least negative characteristics.

Conclusions: The instrument displayed strong psychometric properties: excellent item characteristics, reliability, difficulty and discrimination. It was easy to administer and score, and was able to select applicants who displayed desirable, in the absence of undesirable, non-academic characteristics. The instrument provides a reliable means to assess identified non-academic characteristics of applicants to an undergraduate pharmacy program, with applicability to other health science programs.

 

Performance of 4 consecutive cohorts of year 5 medical undergraduates in a 10 station OSCE
Keywords: OSCE, pediatrics, obstetrics, learner assessment, student, undergraduate
Authors: Niels Illum, Anne Lindebo Holm, Henrik Thybo Christesen and Steffen Husby
Institution: University of Southern Denmark, Department of Paediatrics H, Odense University Hospital and School of Medicine, University of Southern Denmark, 5000 Odense C, Denmark

Summary: The objective structured clinical examination (OSCE) was introduced at the School of Medicine, University of Southern Denmark in 2002 to assess medical students' competency in gynaecology/obstetrics and paediatrics at the end of a six-week mother-and-child teaching block in year 5. The OSCE had 10 test stations, each lasting 10 to 20 minutes. Objective data assessments as well as interaction with trained laypersons were included. The number of students assessed was between 76 and 82 at each OSCE. To pass, 2 criteria had to be fulfilled: 50% of total points had to be answered correctly, and 5 of 10 stations had to be passed with 50% of points answered correctly. The range of points obtained was 0.4 - 0.9 (mean 0.7). On average, 2 students scored below 0.5 at each OSCE and failed to pass. The proportion of students scoring <50% or >90% at each station ranged from 7 to 38%, demonstrating great variance among stations in their contribution to assessing clinical competence. We conclude that, at least in our hands, more precisely formulated questions at each OSCE station are needed for better learner assessment.
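The two pass criteria can be written as a simple rule; the station maxima and scores below are invented, not taken from the examination:

```python
def passes_osce(station_scores, station_maxima):
    """Apply the two criteria from the abstract: >=50% of total points
    and >=5 of the 10 stations each scored at >=50%."""
    total_ok = sum(station_scores) >= 0.5 * sum(station_maxima)
    stations_passed = sum(s >= 0.5 * m for s, m in zip(station_scores, station_maxima))
    return total_ok and stations_passed >= 5

# Hypothetical student: 10 stations, each out of 20 points.
maxima = [20] * 10
scores = [14, 9, 12, 8, 15, 11, 7, 13, 10, 12]
print(passes_osce(scores, maxima))  # True or False for this student
```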

 

Managing change in postgraduate medical education: what the consultant saw
Keywords: organisational change, educators' role
Authors: Agius, S J.; Willis, S.; Mcardle, P.; O'Neill, P A.
Institution: University of Manchester

Summary: Background. The structure and content of postgraduate medical training in the UK are undergoing a major modernisation process. This will have a significant impact on the role of hospital consultants with educative responsibilities.

Methods: The University of Manchester has conducted a qualitative study of the culture of medical education in the SHO grade. The study includes an exploration of hospital consultants' perceptions of the modernisation process, and its impact on their role. Interviews were conducted with 28 consultants with varying education-related duties. These were recorded, transcribed and subjected to content analysis.

Results: There is widespread uncertainty about the nature of change to postgraduate medical education, particularly amongst front-line clinical educators with no additional education-management role. Even those with such roles (e.g. Medical Directors, Clinical and College Tutors) display considerable levels of anxiety and confusion about the modernisation process. There is a strong sense that educational supervisors should have dedicated time to plan and deliver training. This should be supported with appropriate and sustained training for their educational role.

Discussion: Hospital consultants are concerned about the impact of modernisation in postgraduate medical education on their own role. This is understandable given the many pressures on their time, although much of their uncertainty is a result of limited awareness about change combined with communication deficiencies from Government downwards. Development of the regional and local infrastructure that supports medical education is required. The majority of consultants are committed to the education of doctors in training, but greater recognition and support of their role is necessary if goodwill is to be maintained.

 

Evaluation of the educational workshops "A healthy ageing". A health professional's perspective
Keywords: assessment, evaluation, qualitative
Authors: Casas JC*, Isern O*, Vall Mayans M*, Torres A*, Terricabres M**, Datzira M*, Rusiñol J*, Vidal M*, Martínez R**, Puigbí M**, Picas R*, Danés J***, Jaumira E***, Rovira A***, Rovira E***, Castro R***, Montoriol J
Institution: Universitat de Vic

Summary: Our intervention ("A healthy ageing" workshops, 2003) was designed following teaching strategies specially adapted to and conceived for this population.

Goal. The lecturers will evaluate the intervention developed, by identifying its relevant elements. New guidelines for the design and evaluation of educational interventions adapted to this population group will be proposed.

Methodology. A focus group was conducted. The participants were the lecturers of the workshops.

Results. The contents were excessive, too theoretical, and too unidirectional, and the comprehension capacity of the participants was undervalued. There were differences between what was planned and what was finally done. The participatory workshops were the best strategy, because they encouraged participation and addressed concrete problems. The group dynamic generated new knowledge. The programmed sequence of the workshops determines a process of integration of knowledge. The written materials, concise and easy to read, were valued. The participants appreciated confirmation that several things they already do are correct, which helps to demystify the problems. The participants expressed that they had learnt new things that will be useful for addressing concrete problems. They expressed satisfaction and a spontaneous demand for new workshops.

Conclusions. New and important features in the short-term evaluation of our intervention have been detected. The workshop format is the best strategy, but new teaching and dissemination materials must be developed as support and as a reminder of the subjects. Teachers and health professionals must learn how to plan and adapt educational strategies to the target population.

 

Looking to improve Continuing Medical Education (CME)
Keywords: Quality, Assessment, Continuing medical education
Authors: Álvarez Molina Esperanza, Jiménez Ojeda Belén, Prados Castillejo José Antonio, Valverde Gambero Eloísa, Villanueva Guerrero Laura
Institution: Agencia de Calidad Sanitaria de Andalucía

Summary: The Andalusian Agency for Quality in Health Care (AAQHC) assesses the quality of CME activities within our accreditation system, based on the agreements of the National Commission for Continuing Education and the Andalusian Quality Model. To this end, an on-line evaluation system using in-house software (ME_jora_F) has been developed.

Methodology:

1.- Quality in CME and the different kinds of CME activities were defined.

2.- A checklist to evaluate the qualitative component of the accreditation process was developed.

3.- An official application form, with complete on-line help and a design guide, was included in the ME_jora_F program.

4.- A feedback process for CME suppliers was designed.

5.- The ME_jora_F program was tested.

Results. After this test, the accreditation process began last November. We have evaluated 57 activities. In our opinion, this model has the following differential characteristics:

- It guides CME suppliers in the design of their activities.

- The ME_jora_F program gives information about the accreditation process and about every activity previously submitted by the same supplier.

- Our accreditation model lets evaluators work on-line (telework). Each evaluation takes between 60 and 90 minutes.

- The end product is the result of the accreditation process plus a personalized technical report with positive feedback and areas for improvement.

Conclusions: Accreditation is a tool for facilitating continuous quality improvement. The AAQHC has developed an evaluation system for CME activities that describes both the quality level achieved and the specific areas for improvement identified, in order to deliver CME of a high level of quality.

 

A Three Factor Model Underlying the Practice of Optometry: A Confirmatory Factor Analysis
Keywords: confirmatory factor analysis, optometry assessment, clinical skills
Authors: Claudio Violato and Anthony Marini
Institution: University of Calgary

Summary: Purpose. To test the fit of a model of the practice of optometry. Based on previous research, it is proposed that there are three basic factors underlying competency in optometry: 1) Clinical reasoning based on scientific knowledge, 2) Visual clinical skills assessment, and 3) Treating ocular disease.

Method. Data from all three components of the Canadian Standard Assessment in Optometry (CSAO) examinations (knowledge, clinical judgment, and clinical skills) were obtained for 243 candidates. The examinations consisted of two pencil-and-paper components (a knowledge exam of 500 multiple choice questions covering Biological and Health Sciences and Visual Sciences, and a Clinical Judgment exam of 100 MCQs) and a clinical competency exam assessing practical skills with clinical patients in: 1) refractive and accommodative conditions, 2) oculomotor and sensory-integrative conditions, 3) ocular and systemic disease, and 4) ophthalmic appliances.

Results. A confirmatory factor analysis (CFA) employing maximum likelihood estimation was used to test the fit of the three-factor model to a variance-covariance matrix of all of the exam data. The results indicated a good fit of the model to the data (Comparative Fit Index = .92; Residual Mean Square = .03; 87% of the residuals were 0).

Conclusions. As proposed, the results provide evidence that there are three basic latent variables forming the foundations of competency of the practice of optometry: basic scientific biological knowledge and clinical reasoning, visual skills assessment, and treatment of ocular diseases and conditions.

 

Knowledge assessment at the FCS: from goal setting to student feedback
Keywords: objective learning, MCQ, integrated learning
Authors: Neto, I. and Fermoso Garcia, J.
Institution: Medical Education Unit (MEU), Faculty of Health Sciences, University of Beira Interior, Covilhã, Portugal

Summary: The new medical degree delivered at the FCS/UBI is organised in modules with an integrated approach to the human body systems, covering topics in anatomy, physiology, histology and biochemistry. Learning is by objectives, in small groups, through self-learning and student-centred methodologies which allow students to acquire competencies for lifelong learning. Assessment is planned in accordance with previously defined outcomes and is carried out by means of MCQs. The questions are submitted by the tutors of the group in charge of monitoring the learning of each system and are systematically reviewed by the GEM, who make sure they are correctly formulated and in compliance with the learning outcomes. A specific software package (qmark) is used for online assessment and provides automatic correction. The results are immediately available to be analysed according to docimological criteria (discrimination and difficulty indexes of each question), which allows removal of the questions that do not meet the established criteria. Students and tutors may check the assessment results and the key with the correct answers within one hour after the questions have been answered. The entire process is monitored by the MEU, who ensure the quality both of the outcomes and the questions and of the development of the process.
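As an illustration of the docimological criteria mentioned (per-question difficulty and discrimination indexes), one common approach contrasts the top and bottom scoring groups; the answer matrix and the 27% split below are assumptions for the sketch, not the faculty's actual procedure:

```python
import numpy as np

# Hypothetical 0/1 answer matrix: rows = students, columns = MCQ items.
answers = np.array([[1, 1, 0, 1],
                    [1, 0, 0, 1],
                    [1, 1, 1, 1],
                    [0, 0, 0, 1],
                    [1, 1, 1, 0],
                    [0, 1, 0, 1]])

totals = answers.sum(axis=1)
n_group = max(1, int(round(0.27 * len(totals))))      # classical 27% split
order = np.argsort(totals)
low, high = order[:n_group], order[-n_group:]

difficulty = answers.mean(axis=0)                      # proportion correct per item
discrimination = answers[high].mean(axis=0) - answers[low].mean(axis=0)
print("difficulty:", np.round(difficulty, 2))
print("discrimination:", np.round(discrimination, 2))
```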

Conclusion:

- assessment in accordance with previously defined objectives allows students to know exactly which subjects their assessment will focus on

- MCQs enable a reliable and objective assessment

- automatic correction saves the tutors time and work

- the docimological analysis of the questions ensures quality, balanced tests and discrimination of the students' group

- the quick results enable students to get feedback in productive time

 

Teacher role profile at the Faculty of Medicine of the University of Porto: teachers' and students' perceptions as a way to collect data for self-evaluation
Keywords: teacher role, evaluation
Authors: Ferreira, A., Soares, I., Tavares, M.A.
Institution: Faculty of Medicine University of Porto,University of Minho, Portugal

Summary: The Faculty of Medicine of the University of Porto (FMUP) has concluded a process of self-evaluation. A variety of data was collected to better understand the present situation. To better characterize the role of the teacher, a specific study was conducted involving all the students and teachers of that medical school. A questionnaire developed by the University of Dundee was used, in which the participants were asked to indicate the importance given to each of the 12 roles. The response rate was 61% for teachers and 85% for students, allowing generalization of the results. The results disclosed two important conclusions: the prevalent teacher profile is the traditional one – information provider and formal student evaluator; and there is a wide gap between the importance given to all 12 roles by teachers and students and the importance they perceive as being given by the FMUP. These data will contribute to enrich the ongoing discussion and reflection on staff development and curriculum planning. (Supported by FCT - Project POCTI/32883/99)

 

ConSort is a reliable, valid and sensitive measure of knowledge structure
Keywords: Knowledge structure
Authors: McLaughlin K, Sylvain Coderre, Garth Mortis, Henry Mandin.
Institution: University of Calgary

Summary: Background. The relationship between state propositions in memory and process propositions used during diagnostic reasoning is unclear. The objectives of this study were to examine the reliability and validity of a concept sorting program (ConSort) as a measure of knowledge structure, and to determine the relationship between propositions in knowledge structure and propositions used during diagnostic reasoning in novices and experts in nephrology. Methods. ConSort was used to identify state propositions, and protocol analysis of think-aloud protocols was used to identify process propositions. Intra-rater and inter-rater reliability was evaluated using the kappa statistic. Construct validity was evaluated by comparing the proportions of experts and novices with deep knowledge structure. The sensitivity and specificity of state propositions as a predictor of process propositions were estimated.

Results. Thirteen first-year medical students and 19 nephrologists participated in the study. Intra-rater and inter-rater agreement for determination of knowledge structure were 100% and 90.5% respectively. The proportions of experts and novices identified as having deep knowledge structure were 82.9% and 55.8% respectively (P=0.001). The sensitivity and specificity of ConSort in identifying propositions that were used during diagnostic reasoning in novices were 87.2% and 55.1% respectively. The corresponding figures in experts were 96.8% and 27.8% respectively.
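A sketch of the agreement and predictive-value computations reported (Cohen's kappa for rater agreement; sensitivity and specificity from binary vectors); all values below are invented:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical deep (1) / superficial (0) knowledge-structure ratings.
rater_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_2 = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(f"kappa = {cohen_kappa_score(rater_1, rater_2):.2f}")

# Sensitivity/specificity of state propositions as a predictor of
# process propositions (invented binary vectors).
state   = np.array([1, 1, 0, 1, 0, 1, 0, 1])   # proposition present in the concept map
process = np.array([1, 1, 0, 1, 1, 1, 0, 0])   # proposition used while reasoning
tp = np.sum((state == 1) & (process == 1))
tn = np.sum((state == 0) & (process == 0))
fp = np.sum((state == 1) & (process == 0))
fn = np.sum((state == 0) & (process == 1))
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```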

Conclusions. ConSort is a reliable, valid and sensitive technique for studying knowledge structure. The applicability of tools that evaluate knowledge structure should be explored either as an alternative to or as an addition to existing tools that evaluate dynamic tasks such as diagnostic reasoning.

 

Determinants of 'Exceeding Expectations' on the ITER for the Internal Medicine Clerkship
Keywords: ITER; evaluation
Authors: George Vitale, Sylvain Coderre, Marcy Mintz, Allan Jones, Kevin McLaughlin.
Institution: University of Calgary

Summary: Background: The ITER is a composite score of knowledge, clinical skills and professional attitude. The relative contribution of these to the overall ITER score is unknown. The relationship between performance on the ITER and competency has been poorly studied. Our objectives were to determine which variables influence the ITER score and to study the relationship between scores for performance and competency.

Methods: This was a prospective observational study. During a 12-month period, all medical student ITERs on the medical teaching unit rotation were collected. The ITER comprises eight individual components and an overall score for performance. Hospital site, preceptor and student gender were also recorded. Competency was evaluated with an OSCE and an MCQE.

Results: One hundred and three students participated. Fifty-two percent exceeded performance expectations. Three variables were associated with exceeding expectations: achieving an 'above expected level' rating in data skills, in relationships with patients and their families, and in initiative, interest and team relationships. Odds ratios for these were 22.5 [2.2, 222.1] (P = 0.008), 6.5 [1.9, 21.6] (P = 0.002) and 17.4 [3.2, 93.4] (P = 0.001), respectively. Students exceeding expectations on the ITER had higher scores on the MCQE (75.1% (±7.0) vs. 70.5% (±9.3), P = 0.006) and the OSCE (84.3% (±3.3) vs. 82.8% (±2.8), P = 0.02).
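For orientation, an odds ratio with a 95% confidence interval can be derived from a 2x2 table as below; the counts are invented and the abstract's exact modelling (for instance logistic regression) is not stated:

```python
import numpy as np

# Hypothetical 2x2 table: rows = 'above expected' rating on data skills (yes/no),
# columns = exceeded overall ITER expectations (yes/no).
a, b = 40, 5    # rated above expected: exceeded / did not exceed
c, d = 14, 44   # not rated above expected: exceeded / did not exceed

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR), Woolf method
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.1f} [{ci_low:.1f}, {ci_high:.1f}]")
```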

Discussion: Three ITER components had an independent association with exceeding expectations. These attributes appear to evaluate qualities that would make a medical student an effective team member. Students who are rated higher for performance also score higher on competency evaluations.

 

Peer-assessment in problem-based education
Keywords: peer assessment, problem-based learning, instrument
Authors: Van Achter, S.
Institution: VUB (Free University Brussels)

Summary: EPISTAT is a student-centred and competency-based approach to undergraduate education and assessment in Epidemiology and Statistics. Problem-based and case-based education is combined with classical ex-cathedra sessions and exercise sessions to create a powerful learning environment. Ex-cathedra and exercise sessions were evaluated using classical exams; new educational methods, however, demand new and complementary assessment methods. During one semester, EPISTAT students are asked to produce, per group, 2 mid-semester reports and 1 final report. They also have to present their research to the other groups. Peer-assessment is used to evaluate the individual contribution of each student to the group process and products. Students are asked to score themselves and the other members of the group on several pre-defined criteria. The result is an individual factor for each student, which we use to differentiate the group score. A standard peer-assessment calculator was developed and is available. The peer-assessment procedure was carried out after each of the three deliverables and served 2 complementary goals. The first two co-assessment procedures were primarily used to give formative feedback to students; based on this feedback, students could enhance their contribution to the group, and they were also given the chance to learn how to use co-assessment. The final co-assessment procedure was used primarily in a summative way. It enabled us to give accurate individual scores for the group work activities at the end of the academic year.
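The abstract does not give the formula behind the individual factor; one common scheme, shown here only as a sketch and not necessarily the EPISTAT calculator's method, divides each student's mean received peer score by the group mean and scales the group mark accordingly:

```python
import numpy as np

# Hypothetical peer scores: peer_scores[i][j] = score given by student i to student j
# (self-assessment on the diagonal), combined over the pre-defined criteria.
peer_scores = np.array([[8, 7, 9, 6],
                        [9, 8, 9, 7],
                        [8, 6, 9, 6],
                        [9, 7, 8, 7]], dtype=float)

received_means = peer_scores.mean(axis=0)          # mean score each student received
factors = received_means / received_means.mean()   # individual weighting factors
group_mark = 15.0                                   # hypothetical group report mark
individual_marks = np.round(group_mark * factors, 1)
print(dict(zip("ABCD", individual_marks)))
```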

 

Medical-Dental Student Exam Behaviours and Performance on Written Examinations
Keywords: performance, examination, timing, exam - behaviour, assessment
Authors: Toro-Posada, S. and Pachev,G.
Institution: University of British Columbia

Summary: The purpose of this study was to explore the exam behaviours and performance of second year students in the UBC Medical/Dental Integrated Curriculum during written comprehensive examinations (multiple choice questions). Prior to the examination, students were given a questionnaire to gather demographic data (age, gender, academic background and English proficiency). For each examination, students completed a brief questionnaire in which they recorded the time spent on the first run, the time spent in the review(s) if any, the number of answers changed, English proficiency relative to the exam, and the time of submission. Students' scores on each examination served as a measure of their performance. Preliminary analyses compared students who consented with students who did not consent to participate in order to determine the generalizability of results. Medical and dental students' performance was then compared, and differences in exam-behaviour patterns were sought according to students' academic background, age, gender, and English proficiency.

 

Does portfolio contribute to the development of reflective skills?
Keywords: portfolio, assessment, self-evaluation
Authors: Driessen, E.
Institution: Maastricht University

Summary: Questions about the utility of a portfolio as a method for the development and assessment of reflective skills are frequently raised in the literature. However, the literature contains few studies that report answers to these questions. The purpose of this presentation is to give more insight into the practical use of a reflective portfolio in undergraduate medical education. In our research, we were specifically interested in the conditions that promote the development of reflective skills. We interviewed teachers about their experiences with coaching and assessing students in keeping a portfolio, focusing on the teachers' perceptions of the portfolio and of reflection. We used grounded theory methodology to explore teacher perceptions in an open and broad way. All mentors in our study agreed that the process of compiling and discussing a portfolio contributes to the development of reflective ability; the thinking activities that a student undertakes while compiling his or her portfolio are essential for this effect. Factors that are decisive for the successful use of a portfolio are: mentoring, portfolio structure, the nature of student experiences, assessment, and the benefit perceived by the student.

 

Medical student mobility among Spanish universities
Keywords: first year student/ transfer
Authors: Ocaña, L., Jiménez, L., Iríbar, MC, Cañizares, J. and Peinado, JM.
Institution: Faculty of Medicine. University of Granada

Summary: The Spanish university system allows students to select the medical school where they begin their studies. Under certain circumstances, medical students can transfer to other universities after the beginning of their studies. In the present study, a questionnaire was answered by each of the 27 Spanish medical schools, analyzing the geographical origin of first-year medical students, their gender, and their marks. A second questionnaire was centred on student transfers, including their number and the reasons for the transfer, whether geographical mobility or academic formation.

The results show that:

1. A high percentage of first-year medical students live near their medical school.

2. 73% of first-year students are female.

3. The marks required to enter a specific medical school vary between universities. Students with lower marks have to study far from their homes.

4. The principal reason given by students applying for a transfer is to live near the family residence.

5. There is high variability in the number of transfer applications received by each Spanish medical school per year.

 

Assessing clinical reasoning using subjective standardized discussion stations
Keywords: assessment, clinical reasoning, standardized discussion station
Authors: R Umansky, B Weinreb, MA Matar
Institution: Faculty of Health Sciences, Ben Gurion Univ., Israel

Summary: Aim: To broaden the assessment of proficiency of medical students after a clerkship in psychiatry, we assessed their clinical reasoning in an oral discussion station. The station subject matter was structured and scored on a standardized series of variables using "global scoring" principles, ensuring standardization whilst making good use of the assessment skills of experienced senior teaching faculty.

Method: The OSCE was divided into 5 clinical stations and 3 discussion stations, each lasting 15 minutes. Each discussion station contained 5 interlocking topics for discussion relevant to the preceding case, designed to flow through clinical issues, e.g. compulsory care, personality traits and ensuring compliance. Each section was scored individually on a Likert scale. Process was scored for appropriate use of terminology, organization and clinical common sense.

Results: The reliability and validity of the discussion stations were equivalent to those of the clinical stations in eight consecutive OSCE exams (reliability = 8.3 - 9.1). The format was perceived as no more threatening or demanding than the clinical stations that complement them. Examiners welcomed the renewed recognition of their skills as assessors.

Conclusions: Discussion stations can be standardized satisfactorily and enable a reliable and valid means of assessment of clinical reasoning and comprehension, broadening and complementing the skills assessed by clinical OSCE stations, whilst utilizing the assessment skills of experienced senior faculty.

 

Self-audit of trainees' practical activity in anaesthesiology
Keywords: self-audit, teaching, evaluation, anaesthetic procedures
Authors: E.Moret, A.Escudero, E.Massó, M.Hinojosa, R.Rincón, R.Garcia-Guasch, J.Canet.
Institution: Hospital Universitari Germans Trias i Pujol, Badalona, Spain

Summary: Introduction: Training in anaesthetic procedures is carried out under regressive supervision: the more proficient the trainee becomes at a given technique, the less supervision is provided by the instructor. In Spain, trainees' practical activity during the 4-year programme is taken for granted, yet it is not homogeneous: it depends on the surgical schedule, on personal involvement and on the participation of other departments.

Goal: To achieve extensive homogeneous criteria in trainees' practical activity.

Methods: The instructors have settled teaching goals and designed a "trainee's diary" to help trainees quantify their self-audit tasks by using a data sheet. Every procedure is recorded on an electronic data bank which provides both graphical and numerical representation of the learning method. All data are reviewed by the trainee and supervised by the instructor in order to detect possible deficiencies and find solutions.

Results: The data sheet achieved a 75% acceptance rate among trainees. The number of procedures for general anaesthesia during trainees' rotations is very similar, though wide variability is recorded in the number of procedures performed for regional anaesthesia. Once deficiencies are detected, the trainee acknowledges them in 75% and corrects them in 50% of the cases.

Conclusions: The "trainee's diary" is a useful tool for objective self-evaluation of practical skills during the learning phase of basic anaesthetic techniques. It helps to detect deficiencies and find solutions and allows the construction of learning curves for basic skills. It improves training quality and contributes to homogenize the learning process of trainees.

 

Multidimensional standard setting: inter- and intra-rater reliability of the judgmental policy capturing method
Keywords: OSCE standard setting
Authors: Herold McIlroy, J.
Institution: University of Toronto

Summary: Purpose: As an evaluation tool, the OSCE is being used for increasingly complex evaluations and decisions. In our context an OSCE is used as a high stakes examination that is scored on multiple dimensions. Traditional unidimensional standard setting methodologies are insufficient for this situation. This study evaluates the inter- and intra-rater reliability of a method that is designed to explicitly incorporate the multidimensional nature of the OSCE.

Methodology: 15 judges were presented with 150 different examinees' score profiles for the multiple dimensions of the OSCE. They made overall judgments of mastery for each score profile. This process was repeated for a different mastery level. A second group of 8 judges rated the same 150 profiles; however, 50 score profiles were repeated within their package, such that they rated a total of 200 profiles. For the subsample of 50 profiles, intra-rater agreement was evaluated.

Results: Average measure ICCs for the 15 judges making binary ratings were 0.93 and 0.94, while single measure ICCs were 0.47 and 0.52 for the two cutpoints, respectively. The intra-rater kappa coefficients for the 50 repeated profiles ranged from 0.60 to 0.83.
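For reference, single- and average-measure ICCs can be computed from a profiles-by-judges rating matrix; the sketch below uses the one-way random-effects formulas (Shrout and Fleiss Case 1) on invented binary ratings, although the study's exact ICC model is not specified:

```python
import numpy as np

def icc_oneway(ratings: np.ndarray):
    """ratings: rows = score profiles (targets), columns = judges.
    Returns (single-measure ICC(1,1), average-measure ICC(1,k))."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    ms_between = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    single = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    average = (ms_between - ms_within) / ms_between
    return single, average

# Hypothetical binary mastery judgments: 6 profiles rated by 4 judges.
demo = np.array([[1, 1, 1, 1],
                 [0, 0, 1, 0],
                 [1, 1, 1, 0],
                 [0, 0, 0, 0],
                 [1, 0, 1, 1],
                 [0, 0, 0, 1]], dtype=float)
single, average = icc_oneway(demo)
print(f"ICC single = {single:.2f}, ICC average = {average:.2f}")
```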

Conclusions: While average measure ICCs indicate a high level of inter-judge agreement, the extent to which a single judge's score can be generalized to the group of judges is limited. Intra-rater reliability varies even within a small subgroup of judges.

 

Design and validation of an Instrument for epidemiology training program evaluation in Argentina
Keywords: Program Evaluation, Epidemiology
Authors: Garcia Dieguez, M.; Esandi, M. E.;Branda, L.A.; Ortiz, Z.
Institution: Asociación Medica de Bahia Blanca, Centro de Investigaciones Epidemiológicas, Academia Nacional de Medicina

Summary: Training in Epidemiology has increased markedly in recent years. Evaluation of these programs is a fundamental prerequisite for increasing the quality of the educational offer. Objective: to design and validate an instrument for the assessment of Epidemiology training programs. Methodology: Three dimensions were defined for the instrument design: 1. Program Presentation (PP); 2. General contents of the program (GC); 3. Training activities (TA). For the assessment of each of these dimensions, educational standards were defined. For the assessment of instrument reliability, internal consistency (Cronbach's alpha) and stability (inter-observer variability by means of kappa) were measured. Different types of validity were assessed qualitatively. Results: Internal consistency was measured considering the results of 14 training program assessments. Each of these programs was evaluated on the basis of 40 different items (4 for PP, 14 for GC and 22 for TA). Cronbach's alpha for the whole instrument was 0.93 (0.37 for PP, 0.84 for GC and 0.87 for TA). Kappa was 0.36 (P value = 0.06) for the whole instrument (0.30 in PP, P value = 0.024; 0.40 in GC, P value < 0.001; and 0.38 in TA, P value < 0.001). Content and face validity were considered satisfactory, although the significance of some items of the instrument should be clarified. Conclusions: Content validity and internal consistency were appropriate. Its measurement allowed the assessment of the issues that were critical for the beginning of the training programs. On the other hand, the stability and face validity of the instrument should be improved.

 

Metamorphosis of an OSCE for final year medical students
Keywords: OSCE, Long case, Family Medicine
Authors: Moore, P. Moraga, L
Institution: Department of Family Medicine, P. Universidad Catolica de Chile

Summary: In 2000 two teachers organised a pilot study of 20 students at the end of their Family Medicine internship. Now we evaluate all our interns and run an OSCE for 25 students each trimester. The lessons we learnt include the following. The importance of developing a team: designing and running an OSCE is hard work; to make it feasible, a team of teachers needs to plan together. The novelty of the OSCE and a small grant helped to create the enthusiasm to build the initial team. How to maintain the interest of the teachers over time: feedback about the OSCE and a time structure that suits the faculty are essential to keep interest. The OSCE must adapt to the needs of the students and the teachers: the students complained that our first circuit (12 ten-minute stations) fragmented their evaluation. Today we use 2 parallel circuits, each lasting 1 hour: Circuit 1, a structured long case with standardized patients and clinical presentation, integrating the evaluation of knowledge, skills and attitudes; Circuit 2, 6 stations evaluating the application of knowledge to clinical situations. Time helps students and teachers appreciate the OSCE: initially students were sceptical of the marks obtained in an OSCE; today, OSCEs are present in each clinical year of our medical school and have gained acceptability among students and teachers. Keeping our OSCE reliable, valid and acceptable for students and teachers remains our challenge for the future.

 

Students' Performance on Somatic and Psychosomatic History Taking Skills
Keywords: OSCE, history taking, interviewing, psychosomatics
Authors: Schubert, Sebastian; Kiessling, Claudia; Worthmann, Dörte
Institution: Reformstudiengang Medizin

Summary: The undergraduate Reformed Track Curriculum at Charité Medical School in Berlin is a fully integrated 5-year problem-based curriculum. Curriculum planning is centered around organ- and life-phase blocks with continuous communication skills training. The fifth semester consists of four blocks: skin, emergency medicine, sensory systems, and perception and psyche. The end-of-semester OSCE contains 5 communication skills stations. Three of them focus on history taking skills based on cases portraying somatic symptoms, and two on history taking of psychosomatic symptoms as well as interviewing skills. Intercase reliability is known to be generally low in OSCE exams. To evaluate the case-specificity of history taking skills in our 5th semester OSCE, we correlated student performance on somatic and psychosomatic cases. To exclude possible biases, we examined the influence of students' personal characteristics (gender, age, school grades, self-efficacy, coping strategies) on OSCE performance. Furthermore, we calculated the predictive value of student satisfaction with PBL sessions and communication skills training on OSCE performance. The results will be presented and discussed on the poster.

 

Prediction model: a secondary school variable and its capacity to predict university students' performance
Keywords: prediction; secondary; predict; performance
Authors: Goizueta M.; Troyano L.; Román N.F.; Barrios M.; Etchegoyen, F.
Institution: Universidad Maimónides - Faculty of Health Sciences

Summary: Type of study: Descriptive, cross-sectional study.

Objective: This study looked for the relationship between students' performance during the first year of the faculty at Maimónides University (Argentine Republic) and an admission variable, previous educational experience.

Methodology: Simple linear regression was used as the statistical method. The assumptions of linearity, normal distribution of the Y variable, conditional homoscedasticity of X, and independence of errors were checked. The predictor variable, previous educational experience, was summarized as the secondary school percentage grade average; the outcome variable was the grade average during the first year of the faculty. All students entering the first year of the faculty were included (n = 31).

Results: 41% of the variability in the university students' grade average is explained by the secondary school average as the predictor variable. The model indicates that for each one-point increase in the secondary school average, an increase of 0.71 in the university average is expected.

The statistic was significant: F(1, 29) = 20.44.
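A minimal sketch of the simple linear regression described (secondary-school average predicting first-year university average), with invented grade averages:

```python
from scipy import stats

# Hypothetical grade averages (secondary school vs. first university year).
secondary  = [6.5, 7.0, 8.2, 5.9, 9.1, 7.8, 6.8, 8.5, 7.2, 6.1]
first_year = [5.8, 6.4, 7.5, 5.2, 8.6, 7.0, 6.0, 8.0, 6.6, 5.5]

res = stats.linregress(secondary, first_year)
print(f"slope = {res.slope:.2f}")        # expected increase per secondary-school point
print(f"R^2 = {res.rvalue**2:.2f}")      # share of variability explained
print(f"p = {res.pvalue:.4f}")           # equivalent to the F(1, n-2) test
```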

 

Standardized patients in a catalan medical school: a way to learn competencies
Keywords: Standardized patient, undergraduate, competencies
Authors: Descarrega-Queralt, Ramon; Vidal, Francesc; Castro, Antoni; Solà, Rosa; Olivares, Marta; Oliva, Xavier; Ubía, Sandra; Nogués, Susana; Escoda, Rosa; González-Ramírez, Juan
Institution: Facultat de Medicina i Ciències de la Salut - Reus. Universitat Rovira i Virgili de Tarragona

Summary: In 2001 the Faculty of Medicine of Universitat Rovira i Virgili started a project on competency learning. The participants in the project were students in the final years of Medicine. Cases with standardized patients were the formative instrument. The competence components analysed were history taking, physical examination and communication skills. An opinion questionnaire was completed by 50 participants. Through the questionnaire, 18 different areas relating to logistics, organization, contents and learning impact were evaluated using a Likert scale. The results showed that this project is feasible and well accepted, and is a good method to improve the learning process of medical students.

 

Predicting success in medical school from non cognitive aspects of student selection in Venezuela
Keywords: selection, non cognitive attributes, school admission criteria
Authors: RIGGIONE, F., Ponce,M.,Alarcón de Noya B., Requena, M.
Institution: Universidad Central de Venezuela

Summary: Since 1994, our Faculty of Medicine has carried out a national examination to screen cognitive aspects of candidates and their linguistic and mathematical abilities, so academic or cognitive criteria are the mainstay of the selection process. Some studies have indicated that previously demonstrated academic ability (e.g. at school or college) is a good predictor of success at medical school, while others have failed to demonstrate any significant correlation. However, it is recognized that personality and attitudes are important predictors of success. We have attempted to evaluate non-cognitive attributes of candidates by using a personality test, PIHEMA, developed by one of us. This instrument allows medical schools to quantify qualitative characteristics of their applicants, and it was applied experimentally in the 2001 and 2002 admission processes. This study describes the results obtained from the application of this non-cognitive psychometric instrument to 269 applicants selected for study in the Faculty of Medicine in Venezuela and compares its results with the mean grade obtained during the first year of the career in several health schools of the Faculty of Medicine. Preliminary results using the Pearson product-moment correlation coefficient show a positive but low and statistically significant association between applicants' PIHEMA indices and achievement in the first year of the career for students of the Faculty of Medicine. Criterion-related validity using PIHEMA results, candidate achievement in the pre-admission examination and the performance of medical students is discussed.
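A sketch of the Pearson product-moment correlation between PIHEMA index and first-year mean grade, with invented values:

```python
from scipy import stats

# Hypothetical PIHEMA indices and first-year mean grades for 8 applicants.
pihema     = [62, 55, 70, 48, 66, 59, 73, 51]
first_year = [13.2, 12.1, 14.0, 11.5, 12.8, 12.5, 14.5, 12.0]

r, p_value = stats.pearsonr(pihema, first_year)
print(f"r = {r:.2f}, p = {p_value:.3f}")  # positive but possibly modest, as reported
```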

 

Doing a test and learning
Keywords: test, learning, basic sciences
Authors: de la Garza González, C.E.; Morales Pérez M.E. and López Serna N.
Institution: Universidad Autónoma de Nuevo León, Faculty of Medicine

Summary: In this prospective longitudinal study, we explored the learning progress of a group of first-year medical students (n=29) in a faculty with a traditional curriculum. A test consisting of 50 questions (False, True, Don't Know) was applied, on a voluntary basis, three times during the course: the First Test (FT) on the first day of the course, and the Second (ST) and Third Tests (TT) five days after the mandatory second and fourth course tests, respectively. We compared the results between the tests, as well as the results of the TT and the final grades obtained for Embryology. The percentage of Correct Answers (CA) rose from 22.7% to 43.4% and then to 60.1% for the First (FT), Second (ST) and Third Tests (TT), respectively. The values for the Wrong Answers (WA) increased from 21.9% (FT) to 23.4% (ST) and reached 25.9% in the Third Test. The percentage of Don't Know answers decreased from 50.4% (FT) to 30.8% (ST) and to 14.4% (TT). The students' final course grades were better than their Third Test results. According to our results, we suggest that the students' better results on the final grade are due to activities not directly related to the course's mandatory tests (60% of the final grade). In addition, in our students' learning habits, part of the information is stored in short-term memory (for the mandatory tests), where after a while it is forgotten.

 

Development of a Medical History teaching module - 'Perspectives in Medicine'
Keywords: Medical History, reflective learning
Authors: Laura Adam and John McEwen
Institution: University of Dundee

Summary: Within the outcome-based undergraduate medical curriculum at Dundee, students select from a range of study components which complement the common curriculum. We have developed a two-week Medical History module for second and third year students designed to put medical progress into context. Students watch and discuss the six-video BBC series "Microbes and Men". These dramatise the work, attitudes, ambitions and mistakes of Semmelweis, Pasteur, Koch, Ehrlich and their contemporaries during the discovery of the germ theory of disease and the early treatment of infectious illnesses. Other seminars discuss, for example, the use of plants in medicine and the development of local hospitals. A further BBC video, "The Dreaded Lurgy", describes victims of medical fashions through the ages. Students are also given a guided tour of the Royal College of Physicians and the Royal College of Surgeons in Edinburgh, with museum visits, an explanation of the development of medicine and surgery in Scotland and the current role of the Colleges. By the end of the module students are required to produce an essay on the development of a subject or specialty of their own interest. Topics selected have varied widely – for example, the development of anaesthesia, the discovery of insulin, the history of caesarean section, Arabic medicine and Ancient Egyptian medicine. They are guided towards relevant sources of information but are expected to carry out their own library and literature searches. This has become a popular module which assists the development of a reflective attitude in medical students.

 

A survey of cheating on tests among Catholic University of Chile medical students
Keywords: cheating
Authors: Wright, A.; Trivino MD,X; Sirhan MD, X; Moreno MD, R
Institution: Pontificia Universidad Católica de Chile, Escuela de Medicina

Summary: Cheating is an unethical behavior. In medical schools it represents a recurrent problem, with a reported frequency close to 60 percent. To investigate cheating on tests, an anonymous questionnaire was distributed among 97 fourth-year medical students. Students were asked whether they had seen other students cheat and about their attitudes to cheating on ethical, behavioral and legal grounds. They were also questioned on the reasons for, consequences of, and deterrents to cheating. Of the students, 86% reported that they had seen other students cheating. Ninety percent considered cheating unethical, 77% reprehensible, and 43% unlawful. The main reasons for cheating were to obtain better grades (21%), insecurity about the correct answer (16%), and lack of study (13%). Eighty-six percent reported negative consequences related to cheating, 91% considered it detrimental to the student who cheats, and 63% felt cheating to be harmful to peers. Slightly more than half of the students expressed the view that cheating is not related to inappropriate behavior in patient care. The expected increase in grades was mentioned as a positive consequence (60%), especially when applying for residency. The main deterrents proposed were improved test quality (35%), more effective monitoring (28%), and application of institutional regulations. Interestingly, a high percentage of students agreed in their responses and attitudes to cheating. It is remarkable that students perceive test cheating as unethical and as having negative consequences. This constitutes the basis for developing a nurturing culture of Medicine that enhances honesty, integrity, and professionalism.

 

Medical student self-assessment survey on clinical skills. School of Medicine, Catholic University of Chile
Keywords: clinical skills
Authors: Wright, A.; Trivino MD, X; Sirhan MD, M; Moreno MD., R
Institution: Pontificia Universidad Católica de Chile, Escuela de Medicina

Summary: Clinical skills are recognized as a key component of medical education. Since the School of Medicine is involved in a process of curriculum change, the evaluation of learning outcomes is imperative. The aim was to determine the level of clinical skill performance in medical students. It was measured through a self-assessment questionnaire administered prior to graduation, focusing on procedural and clinical laboratory skills, diagnostic and therapeutic skills, and resuscitation skills. Seventy-two students were asked whether they considered themselves proficient, competent, or under-experienced in each of the identified components of clinical skills performance; they could also report an item as completely unknown. Among students, 90% considered themselves proficient in arterial pressure and pulse oximeter measurement, inhaler use, otoscopy, the Snellen test, red pupil examination, simple injury treatment and suture, suture removal, oxygen administration, intramuscular, intravenous, subcutaneous and intradermal injections, venipuncture, and uterine height measurement. The clinical skills reported with less than 50% achievement at the competent or proficient level were nasal plug occlusion, nasal and ear foreign body extraction, blood type testing, abnormal delivery procedures, nasotracheal intubation, and neonatal cardiopulmonary resuscitation. The skills reported as unknown by more than 5% of medical students were simple burn treatment, laryngeal foreign body extraction, venous cutdown, Gram staining, phototherapy system and water trap set-up, autopsy performance, indirect laryngoscopy, the Prick test, psychological and psychometric evaluation, and joint immobilization. The identification of critical issues and the consequent improvement of students' learning of clinical skills enhance and optimize training opportunities for medical students.

 

Practising Doctors Can Accept Review
Keywords: peer review, acceptance
Authors: Kaigas, T.
Institution: Cambridge Hospital

Summary: Acceptance of peer review by doctors in a Canadian community hospital was assessed using a post-review survey. In this program, practising doctors were systematically reviewed in the hospital using a multimodal review process. They then filled out a survey regarding their impression and degree of satisfaction with the review. High acceptance was demonstrated with 92% seeing the review as positive overall. Possible reasons for this are discussed and proposals presented to gain acceptance, even with sceptical groups of doctors.

 

Agreement of item difficulty between item writers and examinee responses
Keywords: Acceptability index, Difficulty index
Authors: Wanvarie, S.
Institution: Ramathibodi Hospital

Summary: Purpose. To assess the agreement of item difficulty between item writers and examinee responses on a comprehensive MCQ examination.

Method. Medical students at the Faculty of Medicine, Ramathibodi Hospital, who graduated in 2000-2003 (the 1994-1997 matriculating cohorts) were assigned to take the MCQ examination in their fifth academic year. The consistency of the classification of item difficulty between the item writers and examinee responses was computed with Epi-Info software.

Results. The observed agreement in the classification of item difficulty between item writers and examinee responses for the academic years 2000, 2001, 2002 and 2003 was 35.6, 30.4, 34.2 and 29.4%, respectively. The Kappa statistics were 0.07, 0.03, 0.04 and 0.03; all p-values were < 0.01. Conclusion. There was a significant association between item difficulty as judged by item writers and by examinees. However, the degree of agreement was low (Kappa statistics < 0.2). The finding calls for better methods of teaching and item writing, as well as of difficulty index classification.
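As an illustration of the agreement analysis described above (a sketch with made-up item classifications, not the study data), observed agreement and Cohen's kappa between the difficulty categories assigned by item writers and those derived from examinee responses can be computed as follows.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical difficulty classifications for ten items.
writer_class   = ["easy", "hard", "moderate", "easy", "hard",
                  "moderate", "easy", "hard", "moderate", "easy"]
examinee_class = ["moderate", "hard", "easy", "easy", "moderate",
                  "moderate", "hard", "hard", "easy", "easy"]

observed = np.mean([w == e for w, e in zip(writer_class, examinee_class)])
kappa = cohen_kappa_score(writer_class, examinee_class)

print(f"Observed agreement: {observed:.1%}")
print(f"Cohen's kappa:      {kappa:.2f}")  # values below 0.2 indicate slight agreement
```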

 

The feasibility, reliability, and construct validity of a program director's (supervisor's) evaluation form for medical school graduates
Keywords: outcomes assessment
Authors: Steven J. Durning, Louis N Pangaro, Linda Lawrence, John McManigle and Donna Waechter
Institution: Uniformed Services University, Bethesda, Maryland 20814, USA

Summary: Purpose: We determined the feasibility, reliability and construct validity of a supervisor's survey for graduates of our institution.

Methods: We prospectively sought feedback from Program Directors for our graduates during their first post-graduate year. Surveys were sent out once yearly, with up to 2 additional mailings. For this study, we reviewed all completed Program Director Evaluation Form surveys from 1993-2002. Interns are rated on a 1-5 scale on each of 18 items; mean scores per item were calculated. Feasibility was estimated by the survey response rate. Internal consistency was determined by calculating Cronbach's alpha and by exploratory factor analysis with varimax rotation. Assuming that our graduates would show a spectrum of proficiency when compared with graduates from other schools, construct validity was examined by analyzing the range of scores, including the percentage of scores below the acceptable level (2 or 1; see the table below).
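As a rough illustration of the internal-consistency calculation mentioned in the methods (a sketch with invented ratings, not the study data), Cronbach's alpha can be computed directly from the item variances and the variance of the total score:

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of ratings."""
    k = ratings.shape[1]                          # number of items
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 ratings from six program directors on four items.
ratings = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [4, 4, 4, 3],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```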

Results: 1297 surveys (81% of graduates) were returned. Cronbach's alpha was 0.93. Mean scores across items were 3.81-4.2, with a median score of 4.0 for all questions (standard deviations ranged from 0.76 to 0.84).

 

Performance (rating)      % of graduates
Outstanding (5)           31.5%
Superior (4)              36.4%
Average (3)               25.2%
Needs Improvement (2)      3.5%
Not Satisfactory (1)       0.1%

Factor analysis found that the survey collapsed into 2 domains (69% of the variance): professionalism and knowledge.

Conclusions: Our survey was feasible and had high internal consistency. Factor analysis revealed two complementary domains (knowledge and professionalism), supporting the content validity. Analysis of the range of scores supports the form's construct validity.

 

Sleep loss and surgical residents' performance on a high-stakes exam
Keywords: resident performance and fatigue
Authors: Hauge, Linnea S.
Institution: Rush University Medical Center

Summary: Purpose of the Study Patient safety initiatives have led to an increased interest in sleep deprivation and surgical resident performance. The purpose of this study is to examine the effect of short-term sleep loss on cognitive performance of surgical residents.

Methodology: The IRB approved this study, and residents provided informed consent prior to participation. A survey of resident preparation for the American Board of Surgery In-Training Exam (ABSITE) was created to include questions about residents' sleep prior to the exam. Survey data was collected from residents over four years (1999-2002), yielding a total of 181 surveys (98% response).

Results: Residents were retrospectively assigned to sleep groups according to their total sleep hours for the two nights immediately prior to the ABSITE. The groups were defined, according to research findings on sleep deprivation and cognitive performance, as follows: short-term sleep loss = 9.5 hours or less, moderate sleep loss = 10-12 hours, and rested = 12.5 hours or more. Preliminary analyses of the groups (K-S Z=2.004, p=.001) and the ABSITE standard scores (F=56.92, df=5, p=.001) demonstrated a difference between PGYs, indicating the use of program year as a covariate in the primary analysis. The three groups differed on ABSITE performance (F=81.2, df=3, p=.001), with the difference between the short-term sleep loss group and the rested group being as great as one-half of a standard deviation.
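The sketch below illustrates, with invented survey records rather than the study data, how residents could be assigned to the three sleep groups using the abstract's cut-offs and how group differences in ABSITE scores could be tested with program year as a covariate (statsmodels OLS plus an ANOVA table); the actual analysis in the study may have differed.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def sleep_group(total_hours: float) -> str:
    """Classify total sleep over the two nights before the exam (abstract's cut-offs)."""
    if total_hours <= 9.5:
        return "short_term_loss"
    if total_hours <= 12:
        return "moderate_loss"
    return "rested"

# Hypothetical records: total sleep hours, program year (PGY), ABSITE standard score.
df = pd.DataFrame({
    "sleep_hours": [8.0, 9.5, 11.0, 12.5, 14.0, 9.0, 10.5, 13.0, 7.5, 12.0],
    "pgy":         [1,   2,   3,    1,    2,    3,   2,    3,    1,   2],
    "absite":      [480, 470, 500,  520,  540,  460, 505,  530,  455, 515],
})
df["group"] = df["sleep_hours"].apply(sleep_group)

# Compare sleep groups on ABSITE score, adjusting for program year as a covariate.
model = smf.ols("absite ~ C(group) + C(pgy)", data=df).fit()
print(anova_lm(model, typ=2))
```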

Conclusions: Surgical residents' ABSITE performance differs according to their degree of short-term sleep loss. These findings highlight a need for general surgery residencies to design programs to manage resident fatigue.

 

The survey of general physicians' views about the quality of compiled and continuing education programs
Keywords: Quality- Continuing Education- GP
Authors: Marashi, T.; Shakoorniya, A.; Heidari Soorshjani, S.
Institution: Faculty of Health, Ahvaz Medical Sciences University

Summary: Continuing education is now accepted as a necessity worldwide; in this context, identifying instructional needs and setting priorities for continuing education programs makes it possible to attain the desired quality. The present study was conducted to determine the views of general practitioners who had participated in the compiled continuing education programs regarding the quality of the programs: their content, their fit with occupational needs, and their capacity to stimulate interest in specialty study. This is a descriptive-analytical study; the sample comprised 451 GPs who had participated in the continuing education programs in 2002, and data were gathered by questionnaire. With respect to the four specific research objectives, 51% of participants rated the programs as very successful in presenting new scientific subjects, 63% judged the program content proportional to their occupational needs, and 61% judged the programs effective at stimulating interest in personal study. For the fourth objective, the most important motivations for participation were rated, in order, as reviewing information (2.70), seeking solutions to professional problems (2.63), exchanging information and experience (2.84), and gaining credit points (3.19). The programs have been largely successful, but further qualitative improvement of the instruction could be obtained by presenting new and relevant scientific subjects, using varied methods in delivering the instructional programs, aligning content with the occupational needs of GPs, and engaging them by raising the important questions.

 

Communication skills: examiner stringency/leniency effect in a family medicine clerkship OSCE
Keywords: OSCE, Judge stringency/leniency, Communication Skills
Authors: Peter H. Harasym, Les Cunning and Wayne Woloschuk
Institution: University of Calgary

Summary: Background: Communication skills are viewed as essential to physician-patient relationships in family medicine. Assessment of communication via OSCEs is difficult since the format may be prone to unexpected sources of error variance.

Purpose: This study examined the reliability of family medicine OSCE scores after a four-week, mandatory clerkship rotation and determined the amount of undesirable variability in clerks' scores due to examiner stringency/leniency effect.

Method: 63 clerks in the Class of 2004, from the first 7 blocks, were evaluated at the end of the rotation using a 90-minute, six-station OSCE. At each station an examiner evaluated the clerk's communication skills using the same 28-item rating form. Twenty-eight examiners and 7 common family practice cases were used to evaluate the clerks' communication skills.

Data: Each rating form contained the clerk's name, the clinical case, the examiner's name, the rotation number, and the ratings (0 = not done, 1 = tried, and 2 = done) on 28 desirable behaviours. The data were analyzed using a 5-facet (rotation, clerk, case, examiner, and question) Rasch model.

Results: A scree test provided evidence of unidimensionality. Good infit and outfit statistics provided further evidence that the data fit the model. The reliability estimates for all facets were high (0.88-0.99). The Rasch model found significant variability in the examiner stringency/leniency effect.

Conclusions: The Rasch model provided evidence of significant undesirable variability due to examiner and had the advantage over classical test theory of removing the effect of examiners from candidate scores.

 

The effectiveness comparison of two educational methods on academic advisors Knowledge, Attitude, and Practice
Keywords: Academic Advisor, Knowledge, Attitude, Practice, Medical Students, Educational Workshop
Authors: Hazavehei, S. Department of Health Promotion and Education, School of Health, Isfahan University of Medical Sciences, Isfahan, Iran. Hazavehei@hlth.mui.ac.ir
Institution: Isfahan University of Medical Sciences

Summary: The purpose of this study was to investigate the effect of two educational methods (a workshop and the provision of educational material) on the level of knowledge, attitude, and practice of academic advisors (AA) at Hamadan University of Medical Sciences. In this study, 72 AA participated in the pre-test section (before the intervention) and 78 AA participated in the experimental program. The AA in the experimental program were randomly divided into two groups: Group 1 (N=43) participated in a one-day workshop (educational method one) and Group 2 (N=44) received only educational material (educational method two). Data on knowledge, attitude, and practice were collected with valid and reliable questionnaires before the educational program and again one academic semester after it. The results indicated significant differences (p<0.001) in the level of knowledge about important educational policies and regulations related to the academic guidance and counselling of students between the pre-test group (M=10.77, SD=4.2), Group 1 (M=14.77), and Group 2 (M=11.54, SD=2.76); these differences existed only between Group 1 and the other two groups. There was a significant difference (p<0.05) in the level of attitude between Group 1 (M=61.79, SD=5.78) and the pre-test group (M=57.20, SD=11.6). This study showed that an educational workshop program based on role playing, group discussion, and group work and interaction can affect behaviour and attitude, resulting in improved skills and abilities. The findings of this research may be beneficial for developing educational programs for the academic advisors of universities.

 

Observation of Performance does not Improve Third Year Medical Students' Self-Assessment of Interpersonal Skills during a Third Year Clinical Skills Exam
Keywords: Self assessment, clinical skills exam
Authors: Armstrong M and Natt N.
Institution: Mayo Clinic

Summary: Background: Accurate self-assessment of performance is considered an essential skill for the physician to master. Research, however, indicates that physicians and medical students have limited ability to assess their own performance.1,2 It is therefore important to incorporate the evaluation and promotion of self-assessment skills into the undergraduate curriculum. Few studies have determined the impact of observation of performance on self-assessment skills. Method: Standardized patients (SP) and third-year medical students (n=43) completed the same interpersonal skills (IPS) rating form for each of 8 videotaped stations during a clinical skills exam (CSE). At the end of the exam, students viewed their videotaped performance at 2 stations and were asked to review their completed IPS forms and make any changes to their self-assessment based on observation of their performance. Spearman rank correlation was used to compare: (i) SP and student IPS scores; (ii) pre- and post-observation student IPS scores; and (iii) SP and post-observation student IPS scores.
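A minimal sketch of the correlation step (with invented ratings, not the study data) using SciPy's Spearman rank correlation:

```python
from scipy.stats import spearmanr

# Hypothetical interpersonal-skills (IPS) ratings across 8 stations (1-5 scale).
sp_scores      = [4.5, 4.0, 4.8, 3.9, 4.2, 4.6, 4.1, 4.4]  # standardized patients
student_scores = [3.5, 3.2, 4.0, 3.0, 3.4, 3.8, 3.1, 3.6]  # student self-ratings

rho, p_value = spearmanr(sp_scores, student_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```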

Results: Student self-assessment pre- and post-observation showed little to no correlation with external (SP) assessment of performance (student IPS scores were lower than SP IPS scores). Observation of performances did not have a significant impact on student self-assessment IPS scores.

Conclusion: Our results indicate that medical students significantly underestimate their performance on the CSE, despite the opportunity to re-evaluate performance through observation. More research is needed to determine the role of observation in evaluating and promoting self-assessment skills.

1. Tracey J, Arroll B, Barham P, Richmond D. The validity of general practitioners' self-assessment of knowledge: cross sectional study. BMJ. 1997;315:1426-28.

2. Woolliscroft JO, Tenhaken J, Smith J, Calhoun JG. Medical students' clinical self-assessments: comparisons with external measures of performance and the students' self-assessments of overall performance and effort. Acad Med. 1993;68(4):285-94.

 

Medical-Dental Student Exam Behaviours and Performance on Written Examinations
Keywords: Evaluation, Assessment, Performance, Behaviour, Written Examination
Authors: Toro Posada, S. and Pachev, G
Institution: University of British Columbia

Summary: The purpose of this study was to explore the exam behaviours and performance of second-year students in the UBC Medical/Dental Integrated Curriculum during written comprehensive examinations (multiple choice questions). Prior to the examination, students were given a questionnaire to gather demographic data (age, gender, academic background and English proficiency). For each examination, students completed a brief questionnaire in which they recorded the time spent on the first run through the paper, the time spent on any review(s), the number of answers changed, their English proficiency relative to the exam, and the time of submission. Students' scores on each examination served as the measure of their performance. Preliminary analyses compared students who consented with students who did not consent to participate, in order to determine the generalizability of the results. Medical and dental students' performance was then compared, and differences in exam-behaviour patterns were sought according to students' academic background, age, gender, and English proficiency.

 

Assessment of the intra-service rotations in anaesthesiology and reanimation: change in methodology
Keywords: Assessment in anaesthesiology, improving trainee's rotation, trainee's evaluation.
Authors: RINCON, R.
Institution: Hospital Germans Trias i Pujol

Summary: Assessment of the intra-service rotations in Anaesthesiology and Reanimation: change in methodology;
Authors: Rincón R, Hinojosa M, Llasera R, Escudero A, Moret E, García Guasch R. Hospital Germans Trias i Pujol. Badalona. Barcelona (Spain).

Introduction: In order to improve the supervision of the trainee rotations, the anaesthetist in charge of each area will complete an evaluation form. The change in methodology will improve the personal performance of the trainee.

Objectives: Improve the final result, reaching the stated objectives more successfully, through the identification of the strengths and weaknesses that need to be improved.

Material and methods: Once the consultant has defined the objectives of their area, the evaluation form is completed at the halfway point and at the end of the period, by both the consultant and the resident independently. The two evaluation forms are compared and contrasted, establishing the points to be improved and tracking the learner's progress. The evaluation includes seven aptitude and five attitude criteria, both assessed qualitatively with a descriptive, non-numerical scale.

Results

-All the trainees and the consultants agreed to be evaluated and to evaluate, respectively.
-In 75% of cases the trainee was unaware of the detected errors and, once informed, corrected 50% of them. Errors in clinical theory are easier to correct than practical errors, since the latter depend on the trainee's learning curve for that specific technique.
-This system improves the quality of observation, the setting of objectives and evaluation.
-Academic activity was re-activated in most of the areas.

Conclusion

-The evaluation form is useful in the detection of problems.
-It improves the quality of training if both evaluations are done during each period.
-The interest of the consultants in residents' training has been re-awakened.
-The extra work needed in this evaluation process requires an allocation of six hours a week for the tutor.

 

Rheumatology Review Course on Personal Learning Projects as a Method of Continuing Professional Development
Keywords: Personal Learning Projects; Continuing Professional Development
Authors: Bell, M., Sibbald, G.
Institution: Sunnybrook and Women's College Health Sciences Centre

Summary: Abstract.

Purpose: To determine whether Rheumatologists adopt and adhere to the use of personal learning projects (PLPs) as a method of continuing professional development (CPD) and maintenance of certification following the introduction to the concept of PLPs and their utilization within a review workshop.

Methods: Rheumatologists attending a 2-day continuing education workshop took part in a 30-minute interactive lecture outlining the concept of learning portfolios and how to use a PLP as a method of continuing education. The attending Rheumatologists filled out pre- and post-workshop evaluation questionnaires, followed by a 3-month follow-up questionnaire.

Results: 25 Rheumatologists, in practice for a mean of 16 years and with a similar number of males and females, completed the pre-, post- and 3-month follow-up questionnaires. Average awareness of CPD methods was 7.8 post-workshop, with a slight increase at the 3-month follow-up. In 2002 the average number of PLPs was reported as 5.8 with a median of zero (range 0-120), while post-workshop and 3-month follow-up results show a personal increase in PLPs in 2003. Time constraints remained the number one barrier to personal involvement with CPD, while paper diaries remained the favoured method of recording PLPs.

Conclusion: There was an increase in Rheumatologists' awareness and application of PLPs, which was sustained at the 3-month period. The benefits and ease of PLPs as a method of CPD require reinforcement to improve adoption and adherence.

 

Patient Satisfaction In An Ambulatory Rheumatology Clinic
Keywords: Patient Satisfaction; Rheumatology Clinic
Authors: Bell, M., Bedard, P.
Institution: Sunnybrook and Women's College Health Sciences Centre

Summary: Purpose: To determine patient satisfaction with care in the Division of Rheumatology at Sunnybrook & Women's College HSC across six domains: provision of information, empathy with the patient, attitude towards the patient, access to and continuity with the caregiver, technical quality and competence, and general satisfaction.

Methods: Patients who had a diagnosis of chronic arthritis and had been seen in clinic on at least three prior occasions were asked to complete the Leeds Patient Satisfaction Questionnaire (LPSQ) once they had registered for their appointment. The LPSQ is a 45-item Likert-scale survey (1-5; <3: dissatisfied, >3: satisfied) measuring satisfaction with care across the six domains described above. The attending rheumatologist and other clinic medical staff were not made aware of which patients had completed the questionnaire. All questionnaires were scored according to the guidelines of the Leeds Satisfaction Questionnaire and were checked by two independent investigators to minimize arithmetical errors. Descriptive statistics were calculated.
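As an illustration of this kind of Likert scoring (a sketch only; the item-to-domain mapping and responses below are invented, and the real LPSQ scoring guidelines may differ), domain and overall satisfaction means can be computed as follows.

```python
import numpy as np

# Hypothetical responses for one patient, grouped by the six LPSQ domains.
domains = {
    "provision_of_information": [4, 5, 4],
    "empathy_with_patient":     [4, 4, 5],
    "attitude_towards_patient": [4, 4, 4],
    "access_and_continuity":    [3, 4, 4],
    "technical_quality":        [5, 5, 4],
    "general_satisfaction":     [4, 4, 4],
}

overall = np.mean([score for items in domains.values() for score in items])
for name, items in domains.items():
    mean = np.mean(items)
    label = "satisfied" if mean > 3 else "dissatisfied" if mean < 3 else "neutral"
    print(f"{name:26s} {mean:.2f} ({label})")
print(f"{'overall':26s} {overall:.2f}")
```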


Results: Eighty-seven patients completed the questionnaire. The mean normalized overall satisfaction score, combining satisfaction ratings across all subgroups, was 4.19. The overall mean scores of the subgroups were: giving of information; empathy with the patient, 4.25; technical quality and competence, 4.63; attitude towards the patient, 4.17; access to the service and continuity of care, 4.00; general satisfaction, 4.00.

Conclusions: Patients appear to be very satisfied with the care they receive. Areas that could be improved in the future include patient education regarding clinic services, waiting times, and receiving urgent consultation if needed.

 

An evaluation model of postgraduate medical education
Keywords: evaluation, postgraduate medical education, student-centeredness
Authors: Infante, C., Garcia, T.
Institution: Universidad Nacional Autonoma de Mexico

Summary: Traditional evaluation of medical students focuses on performance and on scientific and clinical achievement. New student-centered curriculum models have to involve students as active actors in the evaluation process. This study aimed to explore how students assess their experience during the two-year master's degree in health sciences at the National University of Mexico. The technique of natural semantic networks (Reyes 1993) was used as a qualitative approach to investigate how students view their motives for entering the master's degree, what they like most and least, their fulfilled expectations and the usefulness of the course. The students expressed that they entered the program expecting to learn how to teach and do research. They were very satisfied with the academic level, the organization and program content, the methodological skills obtained, the multidisciplinary teamwork and the academic and social atmosphere. They felt they had an excessive amount of work and did not like how the course schedules were organized; the latter factors had negative impacts in terms of personal fatigue and competing family duties. The usefulness of the degree lay in having more knowledge and a better income, being more efficient, and increasing their level of professionalism and leadership in both future teaching and research activities. Based on these results, the study proposes a conceptual model for the assessment of students' perception of postgraduate education in health sciences that includes the initial expectations, the processes and the results of the course (Clewes 2003).

References: Clewes, D. (2003) A student-centered conceptual model of service quality in higher education. Quality in Higher Education, 9(1), 69-86. Reyes-Lagunes, I. (1993) Redes semánticas para la construcción de instrumentos. Revista de Psicología Social y Personalidad, IX(1), 81-97.

 

Teaching and assessing multiple medical competencies using an integrative strategy of basic sciences courses
Keywords: Assessment / Capstone Experience
Authors: Garcia, M., Chinapen,S.,Hernández, C. and Pérez, A.
Institution: Escuela de Medicina San Juan Bautista

Summary: Contemporary medical education faces multiple requirements and challenges. Experts in medical education have defined the required profile for new physicians, including knowledge, skills, duties and values. In the context of a traditional curriculum and institutional changes in both curriculum and assessment, a periodically renewed teaching and assessment strategy integrating the basic sciences courses has been developed. This capstone experience is conceptually based on the required profile for new physicians and on contemporary issues in medical education (medical informatics, communication in medicine, basic science and clinical research, and evidence-based medicine), and it is part of the institutional assessment plan. During an academic semester, randomly selected groups of students work on the design and development of an original research proposal answering a relevant question in the disciplines involved. Using a progressive approach they present: theme selection and justification, hypothesis, selection of materials and methods, results and discussion. Each piece of work comprises a portfolio that is evaluated by teaching staff using predefined criteria, and each includes a conceptual map of the corresponding information search process. Both summative and formative assessment are conducted at every step, and the evaluation includes peer review. The work is guided and supported by the instructors by means of workshops, meetings and electronic communication. The final work is presented to the academic community in poster and oral format. The fifth version of the strategy (Integration Seminar) is currently being developed. We will present the background, conceptual design, improvements and outcomes assessment of this methodology.

 

Determining the effectiveness of written feedback on improving the teaching skills of medical teachers
Keywords: presentation skill, medical education, teacher, assessment, feedback, medical student
Authors: Koleini, N., Farshidfar, F.
Institution: Isfahan University of Medical Sciences

Summary: Introduction: Effectiveness of faculty as teachers is variable, with many faculty lacking formal training. Attending faculty's teaching ability has a positive and significant effect on medical students' learning.

Methods: The aim was to determine the effectiveness of formative assessments of the teaching skills of teachers at Isfahan University of Medical Sciences in improving the assessed skills. After each class, presentation-skills questionnaires were given to the students, who assessed the teaching skills of their teachers; each teacher was assessed five times during the course. The feedback from the questionnaires was then sent to the teachers. Finally, the first and fifth questionnaires were compared.

Results: The study showed significant improvement in the teaching skills of the teachers (p<0.05). The provision of written feedback improved ratings of teaching effectiveness, especially among faculty who had been rated below average.

Conclusion: Objective structured evaluation, with the feedback reported to the teachers, can be successfully applied to assessing faculty teaching performance as a method of developing faculty teaching skills. However, it may be no more discriminating than student evaluations are.

 

Comparison of interns' attitudes related to social medicine
Keywords: Attitude, Social Medicine, Intern
Authors: Jalili, Z.
Institution: Assistant Professor

Summary: Background: Interns' attitudes correspond strongly with their observations and judgments; attitude is actually one of the effective factors influencing the development and modification of medical education. Objective: The present study was carried out in order to compare the attitudes of interns before and after a training course.

Methods: This quasi-experimental study was carried out via convenience sampling on 100 subjects in 2002-2003. The data were gathered via a questionnaire with internal consistency coefficients of 0.86 and 0.89 before and after the study, respectively. Interns completed pre-test (before taking the courses) and post-test (after taking the courses) questionnaires, which were compared and analyzed through parametric and non-parametric tests. Findings: There was a significant relationship between the mean attitude scores pre- and post-test (p<0.05). To compare the ranking of each attitude statement across the two stages, sign tests were carried out; all 27 statements showed a significant difference (p<0.05). No significant differences were observed between the sexes in the pre- and post-training courses.

Conclusion: According to the results of the study, the researchers found that social medicine training courses had a considerable effect on interns' attitudes and could cause changes in their attitudes towards social medicine objectives.

 

What about your ability to have a learning conversation with your trainee?
Keywords: General Practice, train-the-trainers, learning conversation, standardised students
Authors: (1) Schol, S.; (2) Goedhuys, J.
Institution: (1) Free University of Brussels (2) Catholic University of Leuven

Summary: In Flanders, Belgium, general practice trainees perform a large part of their training in the practice of a one-to-one trainer. Learning in this practice does not automatically take place. Everyday work-situations have to be transformed into learning opportunities. Therefore trainers have to be able to discern and perform different types of trainer-trainee conversations. At our Centre we have developed a Multiple-Station Teaching Assessment Test (Schol, S. A Multiple Station Test of the Teaching Skills of General Practice Preceptors in Flanders, Belgium. Academic Medicine, 2001; 76: 176-80). As in an OSCE, trainers have to perform a number of conversations with standardised trainees. For each type of conversation, a checklist for communication analysis was developed. The workshop gives the participants insight into the different types of trainer-trainee conversations, the checklists developed for each of them and the roles of the standardised trainees to evoke these different types of conversations. Furthermore the participants will be able to perform the conversations as a trainer with the simulated trainees who are provided by the authors. The workshop consists of two parts. The first part consists of illustration of the following types of conversations: constructing a learning agenda, leading an advisory conversation, having an exchange of information about practice visits, having a case-related discussion, having a feedback conversation and having an intermediate evaluation conversation. In the second part of the workshop a mini-version of the MSTAT will be organized. At the end of the workshop experiences will be exchanged.

 

Direct Observation and the Evaluation of Clinical Skills
Keywords: Clinical skills, Evaluation, direct observation
Authors: Holmboe, E.; Huot, SJ.; Hawkins, RE
Institution: Yale University, National Board of Medical Examiners

Summary: Intended audience: Any medical educator involved in the observation and evaluation of student or resident trainees, especially for the clinical skills of communication (history-taking and counseling) and physical examination.

Expected educational outcomes:

1. Understand how direct observation of competence (DOC) training can improve the observation and evaluation of clinical skills by faculty

2. Understand the evidence supporting the direct observation of competence (DOC) training method.

3. Improve participants' observation and evaluation of clinical skills

Limit of the number of attendees: Recommend limit to 40.

Format of workshop: This will be an interactive workshop with use of videotapes, group exercises, group discussion, and brief didactic presentations.

Proposed duration of workshop: Prefer 2.5 hours

Description of workshop: Participants will learn about the importance of and the methods involved in direct observation of competence (DOC) training. Results from a randomized controlled trial regarding the efficacy of DOC training will be reviewed. As part of DOC training, participants will be introduced to rater error training (RET), performance dimension training (PDT), frame of reference training (FOR) and behavioral observation training (BOT). Working in small groups, participants will perform PDT and FOR training exercises and then use this experience to judge clinical performance on several scripted videotapes of a standardized patient and trainee. Finally, participants will practice developing and using several types of evaluation forms used in the direct observation of clinical skills.

 

Evaluating Trainees "Synthetically"
Keywords: evaluation assessment
Authors: Pangaro, L.
Institution: Uniformed Services University

Summary: Purpose: Evaluations of trainees on clinical rotations (clerkships, attachments, etc.) are often not trusted, and remain one of the most vexing problems we face. One approach to improve such evaluations is to shift the evaluation terms used by teachers away from the traditional "analytic" model (knowledge, skills and attitudes). A new framework combines these three attributes in behavioral terms with a developmental model; it is called "synthetic".

Consistent with both classical philosophy (observation-reflection-action) and the day-to-day methods of physicians (history/physical-assessment-plan), the synthetic framework describes performance goals for trainees using the following progression: Reporter, Interpreter, Manager/Educator (RIME) (Acad Med 1999;74:1203-7). The framework uses a developmental approach and distinguishes between basic and advanced levels of performance; each step represents a synthesis of skills, knowledge and attitude, a final "common pathway" of professional competencies. This approach has been successfully applied in multiple clerkships and institutions in the United States (Am J Obstet Gynecol 2003;189(3):666-9; Acad Med 2001;76:S105-S107).

Workshop Goals and Methods: We will review different frameworks for describing professional progress (analytic, developmental and synthetic) and explain how to use the "RIME" model in the clinical setting. Participants will work through undergraduate and graduate examples, and use role-play to explore the strengths and limitations of the synthetic vocabulary, emphasizing direct observation of trainees. The specific English-language terms are less important than the conceptual framework, and workshop participants will explore alternative phrasings of the synthetic vocabulary in their own languages.

 

The Development and Conduct of a Structured Oral Examination
Keywords: structured oral, assessment
Authors: Gary Cole, MA, PhD, Senior Research Associate, Royal College of Physicians and Surgeons of Canada; Nadia Z. Mikhael, MD, FRCPC, FCAP, Director of Education, Royal College of Physicians and Surgeons of Canada
Institution: Royal College of Physicians and Surgeons of Canada

Summary: The structured oral is an effective, standardized method for measuring a variety of competencies. It can be used for both formative and summative assessment. This workshop will demonstrate how to develop and conduct a structured oral. Participants will first learn how to create a structured oral, along with the advantages and disadvantages of different rating techniques. They will then be shown how to conduct and rate a structured oral. Twelve videos illustrating proper and improper techniques for questioning, presenting information and managing a structured oral session will be shown and discussed.

 

An innovative evaluation and promotion system for post-MD trainees
Keywords: Evaluation, promotion guidelines, remediation, early identification
Authors: MacLellan, A.
Institution: McGill University, Montréal, Canada

Summary: Through an innovative evaluation and promotion system, the Faculty of Medicine of McGill University has been able to identify residents (post-MD trainees) in difficulty (whether academic or non-academic/professional) much earlier than previously. By having the workshop participants work through case discussions on evaluation and promotion issues, this workshop will focus on the applicability of the McGill evaluation and promotion model to other universities and other jurisdictions. It will also address the reluctance of supervisors (clinical teachers) to give negative feedback and how this model overcomes that problem. The internal appeal process and its benefits in ensuring fairness and removing bias for the trainee will also be discussed. Data from before and after the new system was put in place will clearly demonstrate that residents with problems are identified earlier and that many can be helped by remediation. The participants will be able to discuss the guidelines, their relevance for their trainees and the teaching supervisors, the usefulness of a remediation system and the appeal process. The participants will receive a copy of the McGill Evaluation and Promotions Guidelines and of the generic McGill evaluation form. The goal is for participants to be able to take all or part of this system back to their jurisdiction. This presentation should be carried out as a workshop, but could be presented as an oral presentation or as a poster, which would be the sequel to a poster presented at the 10th Ottawa Conference.

 

High stakes clinical skills assessment: a team approach to the development of standardized patient case material
Keywords: Standardized Patient Case Development
Authors: King, A.
Institution: National Board of Medical Examiners

Summary: The use of standardized patients for high stakes testing of clinical skills is increasing across the continuum of medical education. In order to ensure the high quality of such assessments, it is essential that rigorous attention be given to case development methods. Improving the development of the materials used to train the standardized patients and record or score the examinee encounters is one method for improving the quality of the assessments.

Standardized patient case development requires the inclusion of material related to the patient's life story and information that will be used to score the examinee. The patient's life story includes aspects of their current medical history, past medical history, social history, and family history. Scoring rules need to be clearly defined, with structured guidelines for completing the recording or scoring instrument in a consistent manner. This presentation will outline the iterative process that underlies the team approach that is used for the development of a large-scale assessment of clinical skills. The team approach that is utilized by the National Board of Medical Examiners (NBME) for the Step 2 Clinical Skills Examination (Step 2 CS exam) for the United States Medical Licensing Examination (USMLE) incorporates standardized patients, trainers, and physicians. Each member of the team contributes unique information which enhances the case materials.
