Enfermería Global
On-line version ISSN 1695-6141
Enferm. glob. vol.17 n.50 Murcia Apr. 2018 Epub Dec 14, 2020
https://dx.doi.org/10.6018/eglobal.17.2.263041
Originals
The impact of competency assessment on the quality of learning: nursing degree learners' and teachers' perceptions
1 Escuela Universitaria de Enfermería Cruz Roja Madrid, affiliated with the Universidad Autónoma de Madrid, Madrid, Spain.
2 Department of Nursing, Faculty of Health Sciences, Universidad de Alicante, Alicante, Spain.
Introduction
Competency-based education offers the promise of reducing the gap between education and employment, and concerns in higher education have revolved around how to achieve deep, meaningful, lifelong learning, so that learning can be transferred to real situations, which are complex and changing. Conceptualizing assessment from a competency-based approach requires assuming its multidimensional character and designing assessment for learning, and not only assessment of learning, in order to improve the quality of learning. Awareness of the impact of assessment on learning requires framing assessment as a shared process capable of being simultaneously a cause and an effect of learning. In this context, the aim of this study is to determine learners' and teachers' perceptions of current competency assessment practices and their impact on the quality of learning.
Methodology
Interpretative descriptive study. Qualitative analysis of data collected through open-ended questionnaires and discussion groups with Nursing Degree learners and teachers.
Results
Both teachers and learners believe that current assessment practices condition how students learn, but there are large differences in their perceptions. Teachers perceive this impact negatively and claim that, for students, assessment is only about passing subjects, while students state that assessment influences them positively, guiding their learning and offering opportunities for improvement.
Conclusions
Teachers perceive difficulties in moving beyond the traditional, grading-focused approach to assessment, while students perceive its more formative function and demand sufficient, high-quality feedback from it.
Keywords: assessment impact; competency based education; learning; learners; teachers
INTRODUCTION
Awareness of the impact of assessment on the quality of student learning has been growing, confirming what Elton and Laurillard1 already stated back in 1979: “the quickest way to change student learning is to change the assessment system” (p. 100). Regardless of the assessment modality, type or method used in any educational situation, its impact on the process and product of education is decisive2. Assessment practices in the classroom can both boost and limit student learning3 and competency assessment has been recognised as the most influential factor in what and how students choose to learn, and the quality of their learning outcomes depends on the kind of assessment used4.
This impact of assessment on learning, known as the “backwash effect”5,6,7, points out that students' learning depends to a large extent on what they think will be assessed8. When students see assessment tasks as low cognitive level requirements - such as memory recall - they tend to reduce their learning to specific facts, like disconnected pieces of information, and to reproduce them when they are being assessed, which leads to surface learning. Conversely, when students perceive that the assessment task requires demonstrating a personal interpretation of the underlying principles, they are more prone to study while actually understanding what they are studying, which is closer to a deep approach to learning. Consequently, regarding learning, the “backwash effect” can be both negative and positive, given that the assessment experience influences how students approach learning. This approach is not a stable or final feature. It is instead dynamic and constantly readjusting depending on the context and tasks the student faces9.
Students’ perception of assessment requirements has a significant impact on their learning and, since this impact can be negative or positive, it is necessary to adopt assessment practices which have a positive impact on such learning10. Quality learning requires learners to take an active role, to be able to adopt a strategic and tactical behaviour towards their academic tasks11, a behaviour which is based on reflecting on their own learning and competency development, and not focused on passing assessment tasks. The goal is to prepare the students to learn independently12 through shared-assessment processes capable of creating autonomous learning behaviours and of assessing the transfer of knowledge to the real world, where students need to think critically13.
The purpose of this study is to know how students and teachers perceive the impact that current assessment practices have on learning.
Framework
In the educational context, any assessment must be based on a philosophical and epistemological framework, that is, on an explanation of how the learning process is understood. Traditionally, there has been a tendency to think that assessment was solely the teaching staff's responsibility. However, under the new educational paradigm, while learners need the teacher's assessment, their involvement in the assessment process is also necessary14. The teacher must guide and advise students, but it is students who must self-regulate their learning based on their mistakes.
Depending on their main learning style, students prefer different types of assessment15 and, likewise, the assessment method they are presented with is one of the most influential factors in the learning approach they adopt8,16. That is why teachers must be aware of the learning strategies that particular types of training and assessment trigger in students.
Teachers bear a heavy responsibility for enhancing certain cognitive skills when deciding how, what, where and when to assess, and when using assessment data to improve learning17. Students perceive assessment as a powerful motivator capable of guiding their learning18, which means that the assessment experience may affect their approach to it. As noted above, this approach to learning is not a stable, final feature; it is dynamic and constantly readjusts to the context and tasks the student faces9. Thus, assessment may persuade students to focus on cognitive skills during the learning process19 in such a manner that, through the metacognitive processes boosted by assessment itself, students come closer to deep approaches to learning, in line with the holistic vision of competency and, consequently, with what is expected of them as future professionals. The role of assessment in this context is to foster higher-order thinking20.
There are many authors who agree that there is a need for an assessment that is also part of the curriculum and that helps improve all kinds of learning21. However, it is clear that there is currently a divergence between assessment concepts at the theoretical level and actual practice in the classroom. This divergence is due to the fact that there seem to be significant challenges in implementing such an assessment model. These challenges are attributed to different factors, such as the change in mentality it requires, pressure from the (capitalist and competitive) social model, or the need for other structural and organizational changes in the education system.
If, as Wiggins22 claimed, what is valued gets assessed, then within the framework of the European Higher Education Area (EHEA) student assessment must reflect teachers' inherent interest in knowing what and how students learn and how they transfer and use the knowledge they have acquired to solve real-world problems23. In the context of competency-based learning, teachers need to use assessment methods capable of fostering the development of reflection and critical thinking skills and of creating spaces where students can simultaneously acquire and demonstrate higher-order cognitive skills.
Assessment then becomes a shared process24, aimed at improving teaching-learning processes to enable students to further their learning and teachers to improve their teaching practice, which responds to the conceptualization of formative assessment14. Seen in this light, assessment belongs to all and benefits all25. The implementation of this assessment approach has an impact on the teacher-learner interaction and creates a more constructivist and interactive approach to the teaching-learning process.
MATERIAL AND METHOD
The interpretive-descriptive study falls within a qualitative approach, seeking an internal and holistic perspective on the subject matter based on the assumption that reality is dynamic. It aims to produce descriptive data from people's spoken and written words26. The epistemology supporting this qualitative orientation to assessment processes is based on the idea that the goal of assessment itself is not to discover universal knowledge but to capture the singularities of particular situations and their characteristics. Thus, the sense given to knowledge becomes the theoretical model guiding the assessment process. This is where the sense and meaning of assessment and, underlying it, of education lie.
Participants
The research process is conducted using an intentional, non-probability sample. The participants are first- to fourth-year undergraduate students as well as tenured and collaborating professors of the Nursing Degree programme at the Red Cross Nursing School Madrid, affiliated with the Autonomous University of Madrid (UAM). Convenience or volunteer sampling is used, as required by the Research Ethics Committee of the UAM, given the direct relationship between the main researcher and the population under study (teacher and learners). Acknowledging that this type of sampling may not provide the richest sources of information, and given that qualitative research designs are emergent and evolve throughout the process, the research moves on, for the discussion groups, to a theoretical or deliberate sampling strategy based on the information needs identified after analysing the first results.
The number of participants in the open-ended questionnaires was 44 - 30 students and 14 teachers - and the number of participants in discussion groups was 15 - 7 students and 8 teachers. The sample size was determined by the information needs - the guiding criterion for this aspect being data saturation.
Instruments
The first instrument used in this research is the open-ended questionnaire. As a highly structured interview, it can be considered the technique of choice when the researcher has some knowledge about the phenomenon, but not enough to answer the questions raised28. The questions were designed based on a review of the literature on the phenomenon under study. They are posed in a clear and neutral manner and are aimed at understanding the experience, behaviour, opinions, values, feelings and knowledge of the informants regarding competency-based learning and assessment processes at the EUE_CREM_UAM (Red Cross Nursing School Madrid, Autonomous University of Madrid). Open-ended questions were used, which the informants can answer based on the knowledge they have immediately at hand. These are theory-guided questions, intended to make the interviewee's implicit knowledge more explicit29. Thus, the open-ended questionnaire takes the form of a semi-formalised discourse aiming for the direct, spontaneous and unmediated expression of the participants' own ideas. The questionnaire questions exploring the influence of assessment on learning are provided below (Table I).
After analysing the data obtained from the open-ended questionnaires, two discussion groups are created in order to produce data that would not be accessible without group interaction, capturing the kind of information that is produced in collective mediation30. Since there are two sample groups, the intention has been to create the discussion groups combining minimal homogeneity and heterogeneity criteria in order to enable the discourse of both groups. In this case, homogeneity, with separate groups for teachers and learners, aims to maintain a symmetric relation between the members of each group, while heterogeneity aims to ensure the differences necessary for the discourse process31. The heterogeneity criteria established for selecting participants are: for students, different years of study, age, gender and paths into university; and for teachers, different years of professional experience, age, gender, academic background, relationship with the school and teaching role.
Procedure
The choice for this research is a hybrid, deductive-inductive categorisation process. In other words, starting from a predefined category map, the data-gathering instruments are designed and the field is accessed with the support of a theory that guides and underpins the research process. These categories are then modified over time to adapt them to the dataset to which they are applied. Some inductive data are obtained throughout the coding and categorisation of the meanings stemming from the students' and teachers' accounts. These data are interlinked with the deductive data in a double hermeneutic process: from theory to practice and from practice to theory. Once the category system has been designed and set, the text codes that can be associated with these categories are identified. The information is organised into large content blocks that facilitate its interpretation and the drawing of conclusions32. From this point, the units of meaning or subcategories are identified and the deep process of understanding and interpreting the phenomenon under study begins.
The qualitative data analysis software Atlas.ti 6.0 was selected to support content analysis. This software allowed for a systematic analysis process by building a hypertext structure around the set of documents. This hypertext structure is organised according to concepts and codes, keys and notes, which facilitates the selection of text fragments linked by units of meaning or subcategories. This allowed for a semantically structured reading33.
The final array of families, categories, and units of meaning obtained from the qualitative analysis of the data related to the family “assessment” is shown in the table below (Table II).
RESULTS
Following are the results obtained for the family or qualitative domain “assessment” and, within it, for the category “impact of assessment on learning”.
Open-ended questionnaires
This category aims to assess the influence of assessment processes on students' learning based on their experience, that is, to know how they experience assessment processes and how they perceive their influence on what they learn and how they learn it. At the same time, the category gathers the teachers' views on whether students have the tools to self-regulate their learning and take ownership of it based on the assessment processes. The aim is to explore how teachers perceive students carry out this process and what, in their opinion, the impact of assessment on learning is.
The results show clear differences in how teachers and learners perceive the influence of assessment on learning. Students are unanimous in their perception that assessment has a positive influence on learning. While there are many nuances to how they interpret this influence, in general, they claim that this positive influence is due to the fact that assessment allows them to learn from their mistakes and to know what they need to improve. For them, assessment is the moment when they can become aware of what they know and, sometimes, even of how they have learnt it.
“Assessment processes help us become aware of what we really know… Thanks to assessment processes we can improve what we are not good at and thus further our learning.” (CE17) CE= Student questionnaire
“I think they have an impact when it comes to being aware and realistic about what we know and how we have learnt it”. (CE29)
Conversely, opinions among teachers differ as to whether students have the necessary resources to self-regulate their learning. There is more agreement on the fact that, even when students have the resources to self-regulate their learning, they do not usually do so, either because they are not used to it or because their main interest is passing subjects rather than really learning.
“The tools to design new individual learning paths are available to them; the point is whether they use them. Oftentimes, the final goal is to gradually pass exams and subjects without really realising that all that requires appropriate, long-lasting learning.” (CD1) CD = Teachers questionnaire
There is also a gap between teachers’ and learners’ perceptions regarding this qualifying approach to assessment and its impact on learning. Teachers think that students want to pass subjects without realising that this approach is detrimental to their learning. Thus, when their outcomes are not good, they change their strategy towards the assessment task, but not their learning strategy. Conversely, students demonstrate being fully aware of this and they claim this should not happen.
“I think when a student gets a negative outcome during an assessment, what they generally do is to spend some more time preparing for the subject being assessed, but they do not consider changing or modifying the structure of their learning systems”. (CD1)
“I do think they reshape strategies, but only study strategies... The goal remains the same: to pass.” (CD13)
“I think learning assessment is necessary, not so much for obtaining a numerical score, but as feedback on where each of us stands in terms of internalising and assimilating both the theoretical and the practical basis.” (CE21)
“In my opinion, assessment processes should have no influence, because in this trade we must study things in order to know them, not to pass exams...” (CE16)
To avoid the negative impact of assessment, students demand certain characteristics from it, the key being a participatory and reflective assessment. Feedback is a key element and students think the feedback they receive in the context of practical training is the feedback that really guides their learning:
“[...] as long as it involves participation and reflection on our part, I think it has a positive impact because they help broaden the learning field by being aware of and analysing both what is right and what we have failed at.” (CE24)
“... the fact that nurses are constantly correcting what we do wrong at the hospital, or what we don’t do completely right, makes us realise what our weaknesses are in dealing with the patient and in the techniques we use. (...) This is why I think that competency assessment makes us the most aware of our achievement and challenges during the practice sessions.” (CE29)
The data seem to show a gap or contradiction between teachers’ and learners’ perceptions. Thus, according to the teachers’ perception, students are not aware that they can self-regulate their learning based on the assessment processes, regardless of whether or not they have the necessary tools for that; or, if they are aware, this is not of interest to them, since they are more interested in passing subjects than in learning itself. Conversely, according to the students’ perception, the data confirm the awareness of the impact of assessment on learning, and how it is broadly seen as a positive one due to the feedback that assessment provides on what has been learnt.
Discussion groups
Teachers think that assessment influences learning, but there are no references to the fact that the assessment method used influences learning strategies. Their primary concern is the size of the assessment groups they have to work with.
Among students, however, there is broad recognition that the method that will be used for their assessment totally determines the way they approach their learning.
Assessment shapes learning
In the teachers’ discussion group, when the question arises as to whether what is assessed and how it is assessed determines what students learn, the answer is unequivocally yes. The students say it clearly too - they study differently depending on the assessment method they are presented with:
“[…] I think the way we study is indeed determined not so much by the teacher but by how they assess. I don’t think a multiple-choice test is the same. You don’t prepare for it as you would for a constructed-response test, or an oral presentation, I don’t think you prepare in the same way.” (E4) E= Student
“What she has said about the multiple-choice test, I think it’s true…, I have sat written exams and all, and I prepare differently for a multiple-choice test for instance.” […] “I don’t know how to put it but, in multiple-choice tests, my knowledge is more superficial, not as deep as if I had to write it. I have a more… yes, a more superficial understanding, I don’t know.” (E3)
Learning strategies depending on the assessment method
Teachers naturally accept that when students know how teachers assess, they put much less effort into assessment. They would somehow be adopting strategies to pass, depending on the assessment method, rather than strategies to learn:
“[…] if the student is smart and knows how our school’s assessment system works, if they were smart, they wouldn’t even study for some teachers.” (PT4) PT= Tenured professor
Students say this happens because the requirements for the different ways in which they are assessed are not the same:
“I think that, with regard to that, you don’t need to understand the reason for something in a multiple-choice test. But in order to elaborate on a topic while making sense, you do have to understand it, either while you were studying it or when somebody was explaining it to you. So it is true that, in my opinion, you don’t need to prepare so much for a multiple-choice test, or at least I don’t prepare as much as I would for an essay test. And then, on the other hand, ongoing evaluation assignments, which may trigger new doubts and make you do research on other topics.” (E2)
Furthermore, when students know the teacher and how he or she behaves in the classroom, they are able to guess (thus confirming the teachers’ suspicion) what will be required of them during an assessment and how, regardless of the type of assessment:
“Even if they don’t tell you, but what they emphasise the most while teaching and stuff, you kind of guess what you need to study more or less.” [...] “And when you have had the same teacher for the second subject, you kind of already know him/her a little bit, and you somewhat know how the teacher is.” (E2)
Size of the assessment groups
This unit of meaning stems from the high number of students and, consequently, the large size of the groups, which sometimes leads to group assessment for resource reasons, with a negative impact on learning.
“We work with huge groups of students. If I have five students, I will give you the best five nurses in the world; if I have a hundred, I will give you a hundred half-nurses, and that’s the main issue.” (PT4)
Students think that the large size of the groups has a negative impact on assessment because it hinders or prevents feedback - an element they deem essential for assessment to have a positive impact on learning. They explain that the lack of feedback in continuous assessment processes is due to the high number of students per teacher. This challenge does not exist in the clinical context, where the ratio is nearly always 1:1, while in summative evaluation processes, they link it to the limited possibilities offered by multiple-choice tests:
“I do agree that it is true that oftentimes the volume of students makes individual feedback difficult. But, for instance, when groups are created in continuous assessment..., even if sometimes it is not enough..., I would welcome having feedback.” (E5)
“I think it’s not easy for a teacher to give every student in the classroom the feedback that you can get from a practice session when you supposedly have a nurse for yourself.” (E7)
DISCUSSION
Within the educational context, assessment can be seen as a critical incident, a source of tension for learners and teachers, both because of its influence on what is learnt and how it is learnt and because of its selective and accrediting role34. It is also seen as one of the main challenges in competency-based education, given the complexity itself of the concept and the high demands of this training approach35.
In general, both the concerns of the students and teachers in this study and the usual lines of research run through two essential aspects of assessment: on the one hand, its ability to certify or accredit competency and, on the other, its impact on learning.
The former has to do with the traditional roles of assessment, in which its predominant function is accrediting learning, based on an approach to assessment focused on qualification in a very quantitative way, especially among students, which is one of the main concerns among teachers.
The latter is related to the impact of assessment on learning. The students who participated in the research openly claim how they change their approach to study depending on the type of assessment used. The assessment design then has the capacity to limit what and how students learn, limiting the knowledge, capacities, and attitudes at stake to those that will be required during assessment tasks10.
Recent initiatives, such as the Australian project “Assessment 2020”, have put forward propositions for assessment reform based on the assumption that assessment is a central feature of teaching and the curriculum, and that it powerfully influences how students learn from their experience of higher education and what they gain from it. It is therefore reasonable and necessary to focus debate and effort on improving assessment practices, given their significant impact on the quality of learning36. In the United Kingdom, the Higher Education Academy reinforces, through the project “Transforming Assessment”37, the idea that the assessment of student learning is a fundamental function of higher education and has a vital impact on student behaviour, teacher time, university reputation and, above all, students’ future lives.
It is on the basis of this awareness that the students and teachers who participated in our study call, in their discourse, for an assessment capable of supporting a reflection-based learning process. This demand is particularly clear among students, who ask teachers to recover the formative value of assessment. Contrary to teachers’ perceptions, students in other research have also reported feeling dissatisfied with assessment strategies that favour rote learning38.
Beetham39 proposes that when the value of assessment is focused on accurate reproduction, learners are given opportunities to practise the required concept or skill until they can reproduce it exactly as taught. When the value of assessment resides in internalisation, learners are given opportunities to integrate a concept or strategy with their pre-existing capabilities and thoughts, thus giving them the opportunity to reflect on what it means to them and to make sense of it in different ways. These statements coincide with what the teachers in our study say - they explain how, oftentimes, students develop strategies to succeed at assessment rather than learning strategies. This is confirmed by the students, who claim this happens especially when assessment requirements are not too demanding and can be met by memory recall and by adapting to the teacher’s style and demands.
There are two factors of assessment practices that the students and teachers who have participated in this research identify and point out as having a high impact on learning: the size of assessment groups and the quantity and quality of the feedback received during the assessment process. Furthermore, these two aspects are interlinked - students repeatedly claim that the size of the groups has a negative impact on the quantity and quality of the feedback received during assessment activities, which results in a negative impact on learning.
Based on teacher feedback, students determine the reason and goal for their learning and where they stand with regard to the criteria established as indicators of quality work; they raise questions and find ways to achieve the proposed goals. The point is to empower learners through reflection. Any other approach, focused on students as individuals who may or may not achieve the outcomes and on assessment as a means to obtain high grades rather than an opportunity to learn, will be focused on qualification, not on learning.
Formative assessment, which students demand, is rich in feedback; it is both formative and informative, and its main focus is on offering the guidance needed to improve student work40. Useful, constructive feedback empowers students and provides them with the tools to become autonomous learners41. Feedback is essential to increase meaningful learning and to produce positive effects on the development of students’ professional competencies42.
CONCLUSIONS
While teachers perceive challenges in moving beyond the traditional approach to assessment, still focused on its qualifying function, students perceive its more formative function and demand sufficient, high-quality feedback from it. Both learners and teachers see the quantity and quality of such feedback and the size of the groups as the assessment elements with the greatest impact on the quality of learning.
The impact of assessment practices on learning is closely related to the focus given to assessment and to the underlying learning model. Focusing on and reducing assessment to mere qualification, by both teachers and learners, has a negative impact on learning. To reverse this and achieve a positive impact of assessment on learning, in line with EHEA requirements, assessment must be a shared process capable of giving students the opportunity to become aware of what they learn and how they learn it. It must be based on participation and reflection, rich in (sufficient and high-quality) feedback, and designed to demand complex, higher-order cognitive processes rather than the reproduction of knowledge and behaviours.
REFERENCES
1. Elton L, Laurillard D. Trends in research on student learning. Stud High Educ. 1979; 4(1):87-102. [ Links ]
2. De la Orden A. La función optimizante de la evaluación de programas educativos. RIE Rev invest educ. 2000; 18(2):381-389. Disponible en: http://revistas.um.es/rie/article/view/121051/113741 [ Links ]
3. Wiliam D. What is assessment for learning? Studies in Educational Evaluation. 2011; 37(1):3-14. [ Links ]
4. Doody O, Condon M. Increasing student involvement and learning through using debate as an assessment. Nurs educ pract. 2012; 12(4): 232-237. [ Links ]
5. Alderson JC, Wall D. Does washback exist? Appl Linguist. 1993; 14(2):115-129. [ Links ]
6. Bailey KM. Working for washback: A review of the washback concept in language testing. Lang. test. 1996; 13(3): 257-79. [ Links ]
7. Cheng L, Curtis A. Washback or backwash: A review of the impact of testing on teaching and learning. Washback in language testing: Research contexts and methods. 2004:3-17. [ Links ]
8. Biggs J. Teaching for Quality Learning at University. Philadelphia: The Society for Research into Higher Education & Open University Press; 2003. [ Links ]
9. Struyven K, Dochy F, Janssens S. Students' perceptions about evaluation and assessment in higher education: a review. Assess Eval High Educ. 2005; 30(4):325-341. [ Links ]
10. Tiwari A, Lam D, Yuen KH, Chan R, Fung T, Chan S. Student learning in clinical nursing education: Perceptions of the relationship between assessment and learning. Nurse Educ Today. 2005; 25(4):299-308. [ Links ]
11. Hernández F, Rosario PJ, Cuesta DJ. Impacto de un programa de autorregulación del aprendizaje de estudiantes de Grado. Rev Educ. 2010; (353): 571-588. [ Links ]
12. Ramsden P, Prosser M, Trigwell K, Martin E. University teachers' experiences of academic leadership and their approaches to teaching. Learn Instr. 2007; 17(2):140-155. [ Links ]
13. Silva E. Measuring skills for 21st-century learning. Phi Delta Kappan. 2009; 90(9):630-634. [ Links ]
14. López-Pastor VM. Evaluación formativa y compartida en la universidad: clarificación de conceptos y propuestas de intervención desde la Red Interuniversitaria de Evaluación Formativa. Psychol soc educ. 2012; 4(1):117-130. [ Links ]
15. Argos J, Ezquerra P, Osoro JM, Salvador L, Castro A. La evaluación de los aprendizajes de los estudiantes en el marco del Espacio Europeo de Educación Superior (EEES): sus prácticas, preferencias y evolución. Eur j investig health psychol educ. 2015; 3(3):181-194. [ Links ]
16. Trigwell K, Prosser M. Improving the quality of student learning: the influence of learning context and student approaches to learning on learning outcomes. Higher Education. 1991; 22(3):251-266. [ Links ]
17. Wehlburg CM. A scholarly approach to assessing learning. International Journal for the Scholarship of Teaching and Learning. 2011; 5(2):2. [ Links ]
18. Drew S. Student perceptions of what helps them learn and develop in higher education. Teaching in Higher Education. 2001; 6(3):309-331. [ Links ]
19. Barak M, Dori YJ. Enhancing higher order thinking skills among in-service science teachers via embedded assessment. J Sci Teacher Educ. 2009; 20(5):459-474. [ Links ]
20. Barnett JE, Francis AL. Using higher order thinking questions to foster critical thinking: a classroom study. Educ Psychol. 2012; 32(2):201-211. [ Links ]
21. Casanova MA. Evaluación: Concepto, tipología y objetivos. En: Casanova MA. Manual de evaluación educativa. Aula abierta; 1999. Disponible en: http://148.208.122.79/mcpd/descargas/Materiales_de_apoyo_3/evaluacion-tipologia_casanova.pdf [ Links ]
22. Wiggins GP. Assessing student performance: Exploring the purpose and limits of testing. San Francisco: Jossey-Bass Publishers; 1993. [ Links ]
23. Janisch C, Liu X, Akrofi A. Implementing alternative assessment: opportunities and obstacles. Educ Forum. 2007; 71:221-230. Disponible en: http://files.eric.ed.gov/fulltext/EJ763213.pdf [ Links ]
24. Hamodi C, López Pastor VM, López Pastor AT. Medios, técnicas e instrumentos de evaluación formativa y compartida del aprendizaje en educación superior. Perf Educ. 2015; 37(147):146-161. [ Links ]
25. Santos Guerra MA. La escuela que aprende. Madrid: Morata; 2000. [ Links ]
26. Taylor SJ, Bogdan R. Introducción a los métodos cualitativos de investigación: la búsqueda de significados. Barcelona: Editorial Paidós; 1987. [ Links ]
27. Álvarez JM. Evaluar para conocer, examinar para excluir. Madrid: Morata; 2001. [ Links ]
28. Fernandez Lasquetty B. Introducción a la investigación en enfermería. Madrid: Editorial DAE; 2013. [ Links ]
29. Flick U. Introducción a la investigación cualitativa. Madrid: Ediciones Morata; 2012. [ Links ]
30. Goig RL. Grupos de discusión. Madrid: Esic Editorial; 2004. [ Links ]
31. Pedraz A, Zarzo J, Ramasco M, Palmar AM. Investigación cualitativa. Barcelona: Elsevier; 2014. [ Links ]
32. Bozu Z. La carpeta docente como práctica formativa y de desarrollo profesional del profesorado universitario novel. Un estudio de casos. [Tesis doctoral]. Barcelona: Universidad de Barcelona; 2008. [ Links ]
33. Solano MDC. Vivencias de las personas que han padecido un IAM tras un año de evolución. [Tesis Doctoral]. Alicante: Universidad de Alicante; 2007. [ Links ]
34. Aguayo-González M, Castelló-Badía M, Monereo-Font C. Incidentes críticos em docentes de enfermagem: descobrindo uma nova identidade. Rev Bras Enferm. 2015; 68(2):219-227. Disponible en: http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0034-71672015000200219&lng=es&nrm=iso&tlng=en [ Links ]
35. De la Orden A. Reflexiones en torno a las competencias como objeto de evaluación en el ámbito educativo. Revista electrónica de investigación educativa. 2011; 13(2):1-21. Disponible en: http://redie.uabc.mx/redie/article/viewFile/278/442 [ Links ]
36. Boud D. Assessment 2020: seven propositions for assessment reform in higher education. Sydney: Australian Learning and Teaching Council; 2010. [ Links ]
37. HEA. The Higher Education Academy. A Marked Improvement: transforming assessment in higher education. York: Higher Education Academy; 2012. Disponible en: https://www.heacademy.ac.uk/sites/default/files/a_marked_improvement.pdf [ Links ]
38. Pinilla A, Barrera MP, Soto H, Parra MO, Rojas E, Granados LA. ¿Cómo perciben los estudiantes de pregrado de la Facultad de Medicina de la Universidad Nacional de Colombia su proceso de evaluación académica? Rev Fac Med. 2004; 52(2):98-114. Disponible en: http://168.176.5.108/index.php/revfacmed/article/viewFile/43327/44624 [ Links ]
39. Beetham H. Active learning in Technology-Rich Contexts. En: Beetham H, Sharpe R. Rethinking Pedagogy for a Digital age: designing for 21st Century learning, Abingdon: Routledge; 2010. [ Links ]
40. Brown S. Perspectivas internacionales sobre la práctica de la evaluación en Educación Superior. RELIEVE-Revista Electrónica de Investigación y Evaluación Educativa. 2015; 21(1). Disponible en: http://www.uv.es/RELIEVE/v21n1/RELIEVEv21n1_ME7.pdf [ Links ]
41. Skenderis T, Laskaridou C. Feedback: A Basic Ingredient. Online Submission. 2010; 19:74-80. [ Links ]
42. Pecina Leyva RM. Impacto de la educación basada en competencias en el aprendizaje de alumnos de octavo semestre de licenciatura en enfermería en una universidad pública. Revista Iberoamericana para la Investigación y el Desarrollo Educativo. 2013; (10). Disponible en: http://ride.org.mx/1-11/index.php/RIDESECUNDARIO/article/viewFile/141/136 [ Links ]
Received: July 08, 2016; Accepted: September 05, 2016