
Pharmacy Practice (Granada)

On-line version ISSN 1886-3655; print version ISSN 1885-642X

Pharmacy Pract (Granada) vol. 18 no. 4, Redondela, Oct./Dec. 2020. Epub 15-Mar-2021

Original Research

Training and standardization of simulated patients for multicentre studies in clinical pharmacy education

Karina A Resende (orcid: 0000-0001-7812-7042)1  , Afonso M Cavaco (orcid: 0000-0001-8466-0484)2  , Márcia D Luna-Leite (orcid: 0000-0002-2084-8733)3  , Bianca R Acacio (orcid: 0000-0002-2643-1170)4  , Núbia N Pinto (orcid: 0000-0001-5626-2569)5  , Maria D Neta (orcid: 0000-0003-1279-4541)6  , Angelita C Melo (orcid: 0000-0002-2714-7171)7 

1MSc. Federal University of São João Del-Rei. Divinópolis, MG (Brazil).

2PhD. Associate Professor in Social Pharmacy. Faculty of Pharmacy, University of Lisbon. Lisbon (Portugal).

3PhD. Foundation for Scientific and Technological Development in Health (FIOTEC). Rio de Janeiro, RJ (Brazil).

4MSc. Federal University of Mato Grosso do Sul. Pioneiros, MS (Brazil).

5Federal University of São João Del-Rei. Divinópolis, MG (Brazil).

6MSc. Federal University of Piauí. Teresina, PI (Brazil).

7PhD. Associate Professor in Clinical Pharmacy. Federal University of São João Del-Rei. Divinópolis, MG (Brazil).



Abstract

Objective: To evaluate the training and standardization methods of multiple simulated patients (SPs) performing a single scenario in a multicenter study.


Methods: A prospective quasi-experimental study, using a multicenter approach, evaluated the performance of five different individuals with the same biotype during a simulation session in a high-fidelity environment. The SPs' training and standardization process consisted of four steps and six web- or face-to-face-mediated sessions. Step 1: simulation scenario design and pilot test. Step 2: SPs' selection, recruitment, and initial training (Session 1: performance instructions and memorization request; Session 2: checking the SPs' performances and making adjustments). Step 3 (Session 3): training role-play and performance evaluation. Step 4: SPs' standardization and performance evaluation (Sessions 4 and 5: first and second rounds of SPs' standardization assessment; Session 6: global training and standardization evaluation). SPs' performance consistency was estimated using Cronbach's alpha and the intraclass correlation coefficient (ICC).


Results: In the evaluation of the training results, the Maastricht Simulated Patient Assessment dimensions of SP performance "It seems authentic", "Can be a real patient", and "Answered questions naturally" presented "moderate or complete agreement" from all evaluators. The dimensions "Seems to retain information unnecessarily", "Remains in his/her role all the time", "Challenges/tests the student", and "Simulates physical complaints in an unrealistic way" presented "moderate or complete disagreement" in all evaluations. The SPs' "Appearance fits the role" showed "moderate or complete agreement" in most evaluations. In the second round of evaluations, the SPs performed better than in the first, suggesting that the training process had a positive influence on the SPs' performances. Cronbach's alpha in the second assessment was better than in the first (varying from 0.699 to 0.978). The same improvement occurred in the second round of intraclass correlation coefficients, which ranged between 0.424 and 0.978. The SPs were satisfied with the training method and standardization process and perceived improvement in their role-play authenticity.


Conclusions: The SPs' training and standardization process revealed good SP reliability and simulation reproducibility, demonstrating a feasible method for SP standardization in multicenter studies. The Maastricht Simulated Patient Assessment was found to lack an assessment of the consistency of information between the simulation script and what the SPs provided.

Key words: Patient Simulation; Simulation Training; Education; Pharmacy; Pharmacists; Clinical Competence; Clinical Decision-Making; Educational Measurement; Reproducibility of Results; Brazil


Introduction

Health training using simulated patients (SPs) is increasingly used in pharmacists' education, essentially aimed at broadening students' clinical skills.1-4 The use of SPs provides safe clinical settings for student training and can also be advantageous for researching their competencies. Different health conditions can be simulated, and distinct individuals can be recruited to perform the same scenario.5 Researchers have used SPs to assess the competencies of community pharmacists in minor ailments, such as headache and acute gastroenteritis complaints, as well as in emergency contraception counselling.6-10

In addition to a high-fidelity simulation environment, defined as "a controlled learning environment that closely represents reality", the training and standardization of SPs are important for an accurate reproduction of clinical scenarios.11,12 In research, training quality is a determinant of success because of the risk of bias introduced by the SPs. Strictly trained and validated SPs are critical for the simulation experience to be consistent with the proposed objectives.12-17 They must deliver repeated, reliable performances to ensure the equivalence and realism of the simulation experience for each participant.18 The clinical, social, emotional, and psychological aspects must be virtually the same in all simulations, even though the acting persons are different.16 The reliability of SPs should be assessed using recommended psychometric methods, such as the Maastricht Simulated Patient Assessment instrument, widely used in medical education for SP standardization.17,19

To make use of SPs, accurate methodologies are needed for SP training and standardization, ensuring the necessary reproducibility of the scenario being played.18,20 Generally, authors do not report the training in enough detail for replication by other educators and researchers willing to use SPs.18 In multicenter research, methods for training and standardizing SPs are even more important, as they may reduce bias and decrease the costs and time needed to develop the required SP accuracy. As far as we know, no studies have evaluated the training and standardization of SPs in a multicenter study with large distances between the centers' locations. The present study aimed to evaluate the training and standardization methods of multiple SPs performing a single scenario in a multicenter study, helping those interested in the best use of SPs for clinical pharmacy education and research.


Methods

This study followed a quasi-experimental prospective design and is described according to the checklist for quality studies with SPs.20 It took place between July and November 2019 and aimed to evaluate the performance of five different women with the same biotype, each representing the same SP in a high-fidelity simulation environment. The study used a national multicenter approach, involving one federal university per Brazilian administrative region: the Federal University of São João Del-Rei (Southeast region); the Federal University of Pará (North region); the Federal University of Piauí (Northeast region); the Federal University of Mato Grosso do Sul (Midwest region); and the Federal University of Rio Grande do Sul (South region). The study was carried out in geographic regions with cultural and oral-expression (e.g., pronunciation) differences, posing an additional challenge for the SPs performing the same simulated scenario.21

To develop the SPs' training and standardization methodologies, a literature search identified at least four different methods, presented in the Online appendix.22-25 From these, the research team developed a study methodology comprising several steps by iterative consensus. The simulation scenario involved a female patient, 28 years old, working full-time as a secretary in an accounting office; she was married and the mother of a 2-year-old boy. She seeks care from a pharmacist while experiencing mild allergic rhinitis and no other medical conditions. She is lucid, time- and space-oriented, active, collaborative, and quite talkative. The SPs wore casual clothes (t-shirt and jeans) and no makeup. The final scenario included features intended to reduce bias in students' clinical performance, such as indications of season, hour, and room temperature. Considering the environmental, cultural, and social differences between the five Brazilian administrative regions, a pilot simulation test was carried out in each research center to confirm all scenario features. All simulated encounters were videotaped.

The SPs' training and standardization process consisted of four steps and six sessions, the early ones in person and the last ones at a distance (online). The group of SPs consisted of individuals with the same background degree (pharmacy) and included two graduates and three postgraduates (Master and PhD), aged between 21 and 30 years. None of the SPs had previous experience as simulation actors. After the scenario design, the SPs' training and standardization methods were developed, including reliability and reproducibility assessments.26-31 In the first step, the simulation scenario was developed by the research team according to quality simulation guidelines. In step 2, each research center recruited a candidate to act as an SP, following the characteristics established in the scenario. In session 1, a web meeting held seven days before session 2, the candidates signed the Informed Consent Term (ICT) according to the Standards of Best Practice (SOBP) of the Association of Standardized Patient Educators (ASPE).32 In session 2 the training phase began; it was individual and face-to-face in each research center. The purpose was to check the SPs' performances and make regional and cultural adjustments in each region. One researcher played the pharmacist with each SP. After the simulation, oral and written feedback on each SP's performance was provided. All simulations were recorded, the videos analyzed, and changes in performance discussed.

Step 3, as well as the next steps, consisted of online meetings of 120 to 180 minutes with all five SPs and two researchers. The objective was to finish the training and start the standardization of the SPs. Session 3 occurred one month after the second. The simulated clinical interviews, with one researcher as pharmacist, were repeated with the SPs in this sequence: Southeast, South, Midwest, and Northeast. After each simulation and before the next one, training in performance evaluation took place: 1) a qualitative self-assessment of the performance by the SP; and 2) a qualitative evaluation by the other four SPs and the two researchers (in this sequence), followed by feedback on performance adjustments and an objective assessment of the five SPs by the seven participants using the Maastricht Simulated Patient Assessment instrument. During the third session it was possible to identify role-plays rated as good in the Maastricht Simulated Patient Assessment evaluation despite incorrect or missing information, or new data introduced by the SPs. Therefore, a block on "content fidelity" was designed to cover this facet of a good role-play. The five new items were submitted to external peers and educational experts and were evaluated separately from the Maastricht Simulated Patient Assessment instrument (Table 1).

Table 1.  Simulated patients' performance assessment using the Maastricht Simulated Patient Assessment12 and additional questions about the fidelity of the scenario content

Variable1 — First round of evaluation — Second round of evaluation
% (n); Cronbach's alpha; ICC [95% CI] — % (n); Cronbach's alpha; ICC [95% CI]
SP appears authentic 0.568 0.598 [-0.334: 0.925] 0.699 0.696 [0.124:0.940]
Moderate agreement 51.4 (18) 37.1 (13)
Complete agreement 48.6 (17) 62.9 (22)
SP might be a real patient
Moderate agreement 48.6 (17) 0.614 0.616 [-0.130:0.925] 42.9 (15) 0.758 0.735 [0.271:0.947]
Complete agreement 51.4 (18) 57.1 (20)
SP is clearly role-playing
Complete disagreement 22.9 (8) 0.936 0.934 [0.810: 0.987] 22.9 (8) 0.844 0.855 [0.531:0.972]
Moderate disagreement 45.7 (16) 77.1 (27)
Not applicable 31.4 (11) -
SP appears to withhold information unnecessarily
Complete disagreement 40.0 (14) 0.820 0.827 [0.487:0.966] 45.7 (16) 0.837 0.806 [0.446: 0.961]
Moderate disagreement 60.0 (21) 48.6 (17)
SP stays in his/her role all the time
Complete disagreement 40.0 (14) 0.722 0.722 [0.190:0.945] - 0.879 0.874 [0.637:0.975]
Moderate disagreement 60.0 (21) -
Not applicable - 2.9 (1)
Moderate agreement - 42.9 (15)
Complete agreement - 54.3 (19)
SP is challenging/testing the student
Complete disagreement 34.3 (12) 0.871 0.881 [0.641:0.977] 28.6 (10) 0.978 0.97 [0.935:0.996]
Moderate disagreement 65.7 (23) 68.6 (24)
Not applicable - 2.9 (1)
SP simulates physical complaints unrealistically
Complete disagreement 48.6 (17) 0.815 0.817 [0.446:0.964] 40.0 (14) 0.878 0.845 [0.565:0.969]
Moderate disagreement 51.4 (18) 60.0 (21)
SP appearance fits the role
Complete disagreement 2.9 (1) 0.338 0.314 [-0.764:0.857] - 0.861 0.852 [0.577:0.971]
Moderate disagreement 5.7 (2) -
Moderate agreement 51.4 (18) 48.6 (17)
Complete agreement 40.0 (14) 51.4 (18)
SP answers questions in a natural manner
Moderate agreement 65.7 (23) 0.697 0.720 [0.112:0.947] 62.9 (22) 0.771 0.789 [0.346:0.960]
Complete agreement 34.3 (12) 37.1 (13)
SP starts conversation with the student(s) during the time-out
Not applicable 45.7 (16) 0.808 0.815 [0.451:0.964] 2.9 (1) 0.947 0.943 [0.836:0.984]
Moderate agreement - 62.9 (22)
Complete agreement 54.3 (19) 34.3 (12)
Additional items to the Maastricht Simulated Patient Assessment instrument: fidelity of the scenario content
1 - Relevant scenario information was missing that should have been made available through the patient's spontaneous speech
Complete disagreement 14.3 (1) 0.734 0.665 [0.195:0.937] 57.1 (4) 0.881 0.891 [0.672:0.979]
Moderate disagreement 42.9 (3) 42.9 (3)
Not applicable 42.9 (3)
2 - Scenario information was spontaneously volunteered that should only have been provided upon direct questioning
Complete disagreement 42.9 (3) 0.804 0.818 [0.445:0.965] 14.3 (1) 0.855 0.854 [0.567:0.971]
Moderate disagreement 28.6 (2) 71.4 (5)
Not applicable 28.6 (2) -
Moderate agreement - 14.3 (1)
3 - The SP showed that she did not memorize the content correctly and thus modified or introduced new information into the standardized scenario
Complete disagreement 14.3 (1) 0.702 0.672 [0.119:0.933] 57.1 (4) 0.833 0.845 [0.532:0.970]
Moderate disagreement 57.1 (4) 42.9 (3)
Not applicable 28.6 (2) -
4 - The SP was vague in her responses when she should have been objective
Complete disagreement 57.1 (4) 0.805 0.793 [0.458:0.960] 71.4 (5) 0.845 0.814 [0.441:0.964]
Moderate disagreement 28.6 (2) 28.6 (2)
Not applicable 14.3 (1) -
5 - The SP was objective in her responses when she should have been vague
Complete disagreement 71.4 (5) 0.682 0.784 [0.665:0.934] 71.4 (5) 0.734 0.694 [0.195:0.937]
Moderate disagreement 28.6 (2) 14.3 (1)
Not applicable 14.3 (1)

1Variables rated on a Likert scale: complete disagreement, moderate disagreement, not applicable, moderate agreement, and complete agreement. Only cells with values are shown.

In step 4, the objective was the SPs' standardization and performance evaluation. Session 4, with all five SPs and one researcher as pharmacist, lasted about 180 minutes. The web meeting took place within a week of the third session, to give the SPs enough time to prepare the necessary adjustments. Simulated interviews occurred in the same way as in session 3, but fifteen days apart, and the performance evaluations were done individually after all simulations. The evaluation results and individual feedback were sent by e-mail. The full set of taped simulations was scored for each SP by three independent raters (first round of evaluation). Fifteen days after session 4, session 5 took place with the same process protocol (second round of evaluation).

The Maastricht Simulated Patient Assessment instrument was used to evaluate the SPs' standardization.19 It comprises 20 items divided into two main blocks: "Authenticity during the consultation" and "Feedback after the consultation". All the authenticity questions are represented in the first column of Table 1. Each item is rated on a 5-point scale, running from complete disagreement to complete agreement. The questions related to feedback were withdrawn from our study, since the simulated patient did not provide feedback to the students after the simulation. While the research team was aware of the wide dissemination and use of the Maastricht Simulated Patient Assessment, it also discussed the need to evaluate the SPs' fidelity to the scenario content, i.e., to assess ad-hoc deviations from the proposed script. At the closing web meeting, the "Evaluation of SP perceptions on the training programme" instrument was applied to explore the SPs' perceptions of the usefulness and acceptability of the training method.33 This questionnaire is divided into four blocks: A. My experience as an educator; B. My experience with the SP training workshop; C. My rating of the training workshop (scale of 1 to 10); and D. Any additional comments on own experiences in peer and self-evaluation during the SP workshop.33 The standardization process was evaluated through the consistency of the SPs' performance, estimated using Cronbach's alpha, with alpha values between 0.70 and 0.90 considered acceptable.26 Correlations between SPs' scores were also assessed with the intraclass correlation coefficient (ICC). ICC values were considered poor if <0.4; satisfactory to good if 0.4<ICC<0.75; and excellent if ICC ≥0.75.34 The data analysis consisted of descriptive statistics, with estimates of proportions and percentiles. All analyses were performed in IBM SPSS v24, using a 95% confidence level.
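The two reliability statistics above can be reproduced outside SPSS directly from their ANOVA definitions. The sketch below is illustrative only, not the authors' analysis: it assumes ratings arranged as a targets-by-raters matrix of Likert scores (the example data are hypothetical) and implements Cronbach's alpha, a two-way random, absolute-agreement, single-measure ICC(2,1), and the qualitative thresholds used in this study.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an n_targets x k_raters score matrix."""
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)      # variance per rater
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of row totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                      # between-targets mean square
    msc = ss_cols / (k - 1)                      # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))           # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def interpret_icc(icc: float) -> str:
    """Thresholds used in the study: <0.4 poor; 0.4-0.75 satisfactory/good; >=0.75 excellent."""
    if icc < 0.4:
        return "poor"
    return "satisfactory to good" if icc < 0.75 else "excellent"

# Hypothetical example: 5 rated performances scored by 3 raters on a 1-5 scale.
scores = np.array([[4, 4, 5],
                   [2, 2, 2],
                   [5, 5, 5],
                   [1, 2, 1],
                   [3, 3, 4]])
print(round(cronbach_alpha(scores), 3))  # 0.972
print(round(icc_2_1(scores), 3))         # 0.92
print(interpret_icc(icc_2_1(scores)))    # excellent
```

Note that SPSS offers several ICC models (one-way vs. two-way, consistency vs. absolute agreement, single vs. average measures); which variant the paper reports is not stated, so the ICC(2,1) form here is one common choice, not necessarily the one used.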

The study was conducted according to the guidelines of the Declaration of Helsinki and the provisions of the National Health Committee of Brazil. It was approved by the Research Ethics Committee Involving Humans of the Dona Lindu Midwest Campus of the Federal University of São João Del-Rei (CEPCCO No. 2,853,052).


Results

Table 1 shows the results for the Maastricht Simulated Patient Assessment dimensions. In the first round of evaluation, the dimensions "it seems authentic", "can be a real patient", and "answered questions naturally" presented moderate or complete agreement in 100.0% of the simulations; "seems to retain information unnecessarily", "remains in his role all the time", "challenges/tests the student", and "simulates physical complaints in an unrealistic way" presented moderate or complete disagreement in 100.0% of the simulations. "It is clearly role-playing" presented complete disagreement in 22.9% of the simulations, moderate disagreement in 45.7%, and was not applicable in 31.4%. "Appearance fits the role" showed complete disagreement in 2.9%, moderate disagreement in 5.7%, moderate agreement in 51.4%, and complete agreement in 40.0% of the simulations.

In the second round, "appears authentic", "might be a real patient", "answers questions naturally", and "appearance fits the role" showed moderate or complete agreement for 100% of the simulations; "is clearly role-playing" and "simulates physical complaints unrealistically" showed moderate or complete disagreement for 100% of the simulations. "Appears to withhold information unnecessarily" showed complete disagreement, moderate disagreement, or was not applicable in 45.7%, 48.6%, and 5.7% of the simulations, respectively; "stays in her role all the time" was not applicable, showed moderate agreement, or showed complete agreement in 2.9%, 42.9%, and 54.3%, respectively; "is challenging/testing the student" showed complete disagreement, moderate disagreement, or was not applicable in 28.6%, 68.6%, and 2.9%, respectively; and "starts a conversation with the student(s) during time-out" was not applicable, showed moderate agreement, or showed complete agreement in 2.9%, 62.9%, and 34.3%, respectively (Table 1).

Cronbach's alpha values in the first round varied from 0.338 to 0.936, and the ICC values from 0.314 to 0.934. In the second round of simulations, there was an improvement in the Maastricht Simulated Patient Assessment parameters, with all Cronbach's alphas increasing (0.699 to 0.978). The same was observed for the ICC values (0.424 to 0.978), indicating good agreement between the raters regarding the simulation parameters observed. The additional block of five items assessing the SPs' fidelity to the scenario content presented, in the first round, Cronbach's alpha values between 0.682 and 0.808 and ICC values from 0.665 to 0.815. In the second round of simulations, the Cronbach's alpha values increased (0.734 to 0.947), as did the ICC values (0.694 to 0.943), indicating the scale's good internal consistency as well as agreement between the raters.

The results of the perception questionnaire applied at the final step showed that SPs were satisfied with the training method and standardization process (Table 2). The overall average score received for the training program was 8.6 out of 10. Three SPs said they had improved in terms of their role-play authenticity, while two had improved information retention, and one had not forgotten the role details. Two SPs said the training helped them understand how to improve the simulation for clarity and indicated “I learned by watching other people's performance.” There were no negative comments about the training method.

Table 2.  Self-perception questionnaire for SPs training and standardization27  

Questionnaire items Always % (n) Frequently % (n) Sometimes % (n) Occasionally % (n) Never % (n)
My experience in school, university or college
I have assessed my work/performance in private in a formal manner previously in pre-university education 20.0 (1) 40.0 (2) - 40.0 (2) -
I have assessed my colleagues’ work in private in a formal manner in pre-university education - 60.0 (3) - 20.0 (1) 20.0 (1)
I have self-assessed my work performance openly in front of my peers (class) during pre-university education - 60.0 (3) - 20.0 (1) 20.0 (1)
I have assessed my colleagues' work performance openly in front of peers (class) during pre-university education 20.0 (1) 40.0 (2) - 20.0 (1) 20.0 (1)
SP training workshop: my experience
I felt shy when providing feedback on myself to the group 20.0 (1) 40.0 (2) - 20.0 (1) 20.0 (1)
I learned many things that I did wrong when I did the self-assessment 60.0 (3) 20.0 (1) 20.0 (1) - -
I felt awful when I was providing feedback to others on their performance - - - 40.0 (2) 60.0 (3)
I learned many things when my peers/doctors evaluated me which I would never have thought of myself 60.0 (3) 20.0 (1) 20.0 (1) - -
I felt uncomfortable when others were providing feedback on my performance - - 20.0 (1) 60.0 (3) 20.0 (1)
I felt harassed when others were providing feedback on my performance - - 20.0 (1) 60.0 (3) 20.0 (1)
I used the points shown during self and peer assessment to improve my performance at practice CSU session 40.0 (2) 60.0 (3) - - -
Any specific aspect that I was able to improve on when the self-assessment and peer assessment was done on role play
Authenticity of role 60.0 (3) 20.0 (1) 20.0 (1) - -
Withholding information 20.0 (1) 60.0 (3) 20.0 (1) - -
Forgetting the role 20.0 (1) 40.0 (2) 40.0 (2) - -


Discussion

This study was developed to assess a training process for SP standardization across multiple simulation centers for research purposes, seeking to achieve equivalent SP calibration and avoid bias in later research stages. Health simulation involving human actors interacting with students has been used as a method for assessing health professionals' competence.35 Some pharmacy courses have implemented simulation in their curricula as a way to optimize the training process.36 However, the use of SPs for research and education in multiple and different settings requires greater precision in SP training and performance, seeking to control the possibility of scenario bias introduced by the SPs.37

Knowing the need to standardize SPs for research purposes and to obtain reliable results, the authors developed a possible procedure for multicenter studies, evaluating its efficacy and reliability. The method considered the large distances between study centers and the use of SPs with cultural discrepancies, seeking to reduce funding costs, travel and time restrictions, and the limited availability of SPs for travel.21,38-41 In the literature, it was possible to identify at least four different standardization methods, referred to previously.22-25 Their authors presented the necessary SP production components, with variable depth and perceptible differences. Our standardization protocol incorporated most recommendations of the previous approaches, aligning the chosen options in a single, clear study protocol (Figure 1).

Figure 1.  Proposed method for the training and assessment of simulated patients in multicenter studies

According to the SOBP of the ASPE, training can be performed in various formats (e.g., face-to-face, online, or combined).32 In the context of multicenter studies, considering the difficulty of face-to-face meetings, the combined format was chosen, with the study results showing it was an adequate option. Candidates were recruited following the criteria established by the ASPE SOBP.32 Characteristics such as age appropriateness for the role and proximity to the pharmacy area were respected in the study. Sending the scenario to the prospective SPs in advance may have contributed to easier memorization of the script content. The SPs were required to remember the relevant facts and the background of their scripted role to achieve a good performance.37 A relevant point was the inclusion of the SPs in all steps of the study: this allowed direct interaction and knowledge exchange between the five SPs. This feature of SPs working together and with other staff has been reported as a key point for simulation improvement.37 All SPs received individual feedback, which also helped to improve performance by discussing the necessary adjustments.32 Studies have identified feedback as a valuable tool to increase understanding and consolidate information.42,43 Another important aspect of our study was the SPs' observation and assessment of their peers' performance, which helped self-assessment and self-reflection. The usefulness of self-assessment in improving learning has been demonstrated.44 The use of videos to evaluate personal and peer performance, as used in this assessment method, seemed to be effective. Previous studies have shown that videos can truly improve performance.38

The development of a training method is incomplete, or prone to criticism, if there is no measurement of the reliability of the training procedure. The responses obtained with the "augmented" Maastricht Simulated Patient Assessment instrument were analyzed for internal consistency and reliability using Cronbach's alpha and the ICC. The results showed good values for both statistics on SP performance, indicating that the SPs in this group resembled one another.

The assessment of a new training method should also include an investigation of the participants' acceptance. The questionnaire on the SPs' perceptions of the standardization process showed that the SPs were satisfied with the training program and recognized the importance of standardized outcomes.33 The two training rounds seemed to be adequate for achieving reliability in the SPs' performance.

Despite the recognized importance of the Maastricht Simulated Patient Assessment, in our opinion this instrument needs a revision to include a dimension that could be named "consistency of the information provided".19 Although based on a small sample, the reliability tests were satisfactory for this new Maastricht Simulated Patient Assessment block. We also adapted the instrument to our study by removing the feedback block, following other studies that found some very specific Maastricht Simulated Patient Assessment items irrelevant to certain research institutions and objectives.19 Perera et al. (2015) adapted the Maastricht Simulated Patient Assessment to their study context, while Bouter et al. (2013) proposed a new instrument based on it, the Nijmegen Evaluation of the Simulated Patient (NESP), which focused only on feedback.33,45 Due to the nature of the instrument, organized in independent blocks, it was possible to withdraw one block and add a new one as a first attempt to expand the scope of the Maastricht Simulated Patient Assessment.

This study presents several limitations. The risk of evaluator bias is inherent in this type of study. However, as shown in Figure 1, bias-control measures were taken: training and standardization of the evaluators in step 3; blinding of the evaluators in steps 4 and 5; and, finally, the analysis of internal consistency using Cronbach's alpha and the intraclass correlation coefficient (ICC). Given the great geographic distance between simulation centers, some details may not have been directly controllable by the research team, such as details of the set layout, including personal features (e.g., makeup, clothing) and room organization (e.g., furniture, lighting, and interpersonal distance). It was also not possible to control the homogeneity of SP features such as paralanguage, although a pilot test was performed in each region. The sample size was too small for generalization of the results, and additional studies applying the procedure in multiple locations should be performed with larger SP samples and different clinical situations.


Conclusions

This study developed a feasible method for training simulated patients for multicenter studies. The procedure proved reliable, given the demonstrated equivalence between the SPs' performances, and took into consideration the cultural differences between SPs from different regions, which also supports its validity. The study also proposed an instrument extension addressing the missing SP assessment dimension: the consistency of the information provided by the SPs with respect to the simulation script, a possible Maastricht Simulated Patient Assessment block subject to subsequent validation.


Acknowledgments

We would like to thank the UFSJ Simulation and High-Fidelity Skills Laboratory, the multimedia center of the UFRGS pharmacy college, and the UFPI and UFPA pharmacy schools for the space and materials provided for the simulations, and the laboratory technicians for their support. We also thank the collaborators at the partner universities: Diogo Pilger (UFRGS); Hilris Rocha e Silva (UFPI); Maria Tereza Duenhas Monreal (UFMS); Marcos Valerio Santos da Silva and Patrick Luiz Cruz Sousa (UFPA).


Funding

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES), Finance Code 001; Federal University of São João Del-Rei.


1. Vyas D, Bray BS, Wilson MN. Use of simulation-based teaching methodologies in US colleges and schools of pharmacy. Am J Pharm Educ. 2013;77(3):53.

2. Eukel HN, Frenzel JE, Skoy ET, Focken RL, Fitz AL. An introductory pharmacy practice experience using simulated patient care activities in a pharmaceutical care laboratory environment. Curr Pharm Teach Learn. 2014;6(5):682-669.

3. Smithson J, Bellingan M, Glass B, Mills J. Standardized patients in pharmacy education: An integrative literature review. Curr Pharm Teach Learn. 2015;7(6):851-863.

4. Seybert AL. Patient simulation in pharmacy education. Am J Pharm Educ. 2011;75(9):187.

5. Alinier G. Developing high-fidelity health care simulation scenarios: a guide for educators and professionals. Simul Gaming. 2011;42(1):9-26.

6. Ibrahim IR, Palaian S, Ibrahim MI. Assessment of diarrhea treatment and counseling in community pharmacies in Baghdad, Iraq: A simulated patient study. Pharm Pract (Granada). 2018;16(4):1313.

7. Ibrahim MI, Palaian S, Al-Sulaiti F, El-Shami S. Evaluating community pharmacy practice in Qatar using simulated patient method: acute gastroenteritis management. Pharm Pract (Granada). 2016;14(4):800.

8. Schneider CR, Gudka S, Fleischer L, Clifford RM. The use of a written assessment checklist for the provision of emergency contraception via community pharmacies: a simulated patient study. Pharm Pract (Granada). 2013;11(3):127-131.

9. Elayeh ER, Hammad EA, Tubeileh RH, Basheti IA. Use of secret simulated patient followed by workshop based education to assess and improve inhaler counseling in community pharmacy in Jordan. Pharm Pract (Granada). 2019;17(4):1661.

10. Mobark DM, Al-Tabakha MM, Hasan S. Assessing hormonal contraceptive dispensing and counseling provided by community pharmacists in the United Arab Emirates: a simulated patient study. Pharm Pract (Granada). 2019;17(2):1465.

11. Medical Subject Headings (MeSH). High Fidelity Simulation Training. Available at: (accessed Oct 12, 2020).

12. Roberts F, Cooper K. Effectiveness of high fidelity simulation versus low fidelity simulation on practical/clinical skill development in pre-registration physiotherapy students: a systematic review. JBI Database System Rev Implement Rep. 2019;17(6):1229-1255.

13. Brata C, Schneider CR, Marjadi B, Clifford RM. The provision of advice by pharmacy staff in eastern Indonesian community pharmacies. Pharm Pract (Granada). 2019;17(2):1452.

14. Campbell JL, Carter M, Davey A, Roberts MJ, Elliott MN, Roland M. Accessing primary care: a simulated patient study. Br J Gen Pract. 2013;63(608):e71-e76.

15. Nicholson J, Kalet A, van der Vleuten C, de Bruin A. Understanding medical student evidence-based medicine information seeking in an authentic clinical simulation. J Med Libr Assoc. 2020;108(2):219-228.

16. Schlegel CAB. Simulated and standardized patients in health profession education: the impact of quality improvement. Maastricht: Maastricht University; 2016. ISBN: 978 94 6159 525 6.

17. MacLean S, Geddes F, Kelly M, Della P. Simulated patient training: Using inter-rater reliability to evaluate simulated patient consistency in nursing education. Nurse Educ Today. 2018;62:85-90.

18. MacLean S, Kelly M, Geddes F, Della P. Use of simulated patients to develop communication skills in nursing education: An integrative review. Nurse Educ Today. 2017;48:90-98.

19. Wind LA, Van Dalen J, Muijtjens AM, Rethans JJ. Assessing simulated patients in an educational setting: the MaSP (Maastricht Assessment of Simulated Patients). Med Educ. 2004;38(1):39-44.

20. Howley L, Szauter K, Perkowski L, Clifton M, McNaughton N; Association of Standardized Patient Educators (ASPE). Quality of standardised patient research reports in the medical education literature: review and recommendations. Med Educ. 2008;42(4):350-358.

21. Instituto Brasileiro de Geografia e Estatística. [Cities and states]. Available at: (accessed Jul 27, 2020).

22. Meier RS, Perkowski LC, Wynne CS. A method for training simulated patients. J Med Educ. 1982;57(7):535-540.

23. Wallace P. Coaching standardized patients: for use in the assessment of clinical competence. New York: Springer; 2006. ISBN: 9780826102249.

24. Furman G. The role of standardized patient and trainer training in quality assurance for a high-stakes clinical skills examination. Kaohsiung J Med Sci. 2008;24(12):651-655.

25. Nestel D, Fleishman C, Bearman M. Preparation: developing scenarios and training for role portrayal. In: Nestel D, Fleishman C, Bearman M, eds. Simulated patient methodology: theory, evidence and practice. Chichester: Wiley-Blackwell; 2014. ISBN: 9781118761007.

26. Cleland JA, Abe K, Rethans JJ. The use of simulated patients in medical education: AMEE Guide No 42. Med Teach. 2009;31(6):477-486.

27. Lubarsky S, Dory V, Duggan P, Gagnon R, Charlin B. Script concordance testing: from theory to practice: AMEE guide no. 75. Med Teach. 2013;35(3):184-193.

28. INACSL Standards Committee. INACSL standards of best practice: Simulation℠ Outcomes and objectives. Clin Simul Nurs. 2016;12(suppl):S13-S15.

29. INACSL Standards Committee. INACSL standards of best practice: Simulation℠ Participant evaluation. Clin Simul Nurs. 2016;12(suppl):S26-S29.

30. INACSL Standards Committee. INACSL standards of best practice: Simulation℠ Simulation glossary. Clin Simul Nurs. 2016;12(suppl):S39-S47.

31. INACSL Standards Committee. INACSL standards of best practice: Simulation℠ Professional integrity. Clin Simul Nurs. 2016;12(suppl):S30-S33.

32. Lewis KL, Bohnert CA, Gammon WL, Hölzer H, Lyman L, Smith C, Thompson TM, Wallace A, Gliva-McConvey G. The Association of Standardized Patient Educators (ASPE) Standards of Best Practice (SOBP). Adv Simul (Lond). 2017;2:10.

33. Perera J, Perera J, Abdullah J, Lee N. Training simulated patients: evaluation of a training approach using self-assessment and peer/tutor feedback to improve performance. BMC Med Educ. 2009;9:37.

34. Matos DAS. [Reliability and agreement among evaluators: applications in the educational area]. Est Aval Educ. 2014;25(59):298-324.

35. Sommer M, Fritz AH, Thrien C, Kursch A, Peters T. Simulated patients in medical education - a survey on the current status in Germany, Austria and Switzerland. GMS J Med Educ. 2019;36(3):Doc27.

36. Veettil SK, Rajiah K. Use of simulation in pharmacy practice and implementation in undergraduate pharmacy curriculum in India. Int J Pharm. 2016;8(7):1-5.

37. Peters T, Sommer M, Fritz AH, Kursch A, Thrien C. Minimum standards and development perspectives for the use of simulated patients - a position paper of the committee for simulated patients of the German Association for Medical Education. GMS J Med Educ. 2019;36(3):Doc31.

38. Yang HX, Xu Y, Liang NX, Chen W, Yan XM, Yang P, Guan ZY, Pfeiffer CA, Li Q, Zhao J, Pan H. Standardized patient methodology in mainland China: a nationwide survey. BMC Med Educ. 2019;19(1):214.

39. Cuts in CNPq: uncertainty about future of research in Brazil. (accessed Oct 12, 2020).

40. Motion difficulties in Brazil. (accessed Oct 12, 2020).

41. Oliveira R. [Field trips go into the uncertainty]. (accessed Oct 12, 2020).

42. Jug R, Jiang XS, Bean SM. Giving and receiving effective feedback: A review article and how-to guide. Arch Pathol Lab Med. 2019;143(2):244-250.

43. Hammoud MM, Morgan HK, Edwards ME, Lyon JA, White C. Is video review of patient encounters an effective tool for medical student learning? A review of the literature. Adv Med Educ Pract. 2012;3:19-30.

44. Gehringer EF. Self-Assessment to Improve Learning and Evaluation. 2017 ASEE Annual Conference & Exposition, Columbus, OH, Jun 2017.

45. Bouter S, van Weel-Baumgarten E, Bolhuis S. Construction and validation of the Nijmegen Evaluation of the Simulated Patient (NESP): assessing simulated patients' ability to role-play and provide feedback to students. Acad Med. 2013;88(2):253-259.


This study was funded in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES), financial code 001, and by the Graduate Program in Pharmaceutical Sciences of the Federal University of São João Del-Rei through a graduate fellowship (no. 005, dated February 25, 2013; resolution no. 003, dated February 24, 2017; and resolution no. 006, dated July 9, 2018). The funders had no role in the study design or implementation; in the collection, management, analysis, or interpretation of data; in the writing, review, or approval of the manuscript; or in the decision to submit the article for publication.

Received: June 24, 2020; Accepted: October 25, 2020


The authors have no conflicts of interest to declare in this paper.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.