
FEM: Revista de la Fundación Educación Médica

Online ISSN 2014-9840 · Print ISSN 2014-9832

FEM (print ed.) vol. 20, no. 3, Barcelona, June 2017. Epub 16 Aug 2021

https://dx.doi.org/10.33588/fem.203.895 

EDITORIAL

Evaluación de los procesos formativos de los médicos: ¿son adecuados?, ¿damos feedback?

Evaluation of the learning processes in doctors’ training: are they adequate? Do we provide feedback?


Arcadi Gual1  2  3 

1 Professor, Faculty of Medicine and Health Sciences, Universitat de Barcelona

2 Secretary of the Sociedad Española de Educación Médica (SEDEM)

3 Trustee of the Fundación Educación Médica (FEM)

It is widely accepted that our healthcare system is good and that the professionals working in it are excellent. We have excellent faculties of medicine, we work in modern healthcare institutions, and our health outcomes must be rated as good. At the centre of this complex structure stand the health professionals, most especially the doctors and nursing staff.

In the field of medical education we are concerned about their training at each of its stages, which range from the bachelor's degree and specialised training to continuing education. And since we unhesitatingly accept that the premises are correct – healthcare is good, health outcomes are good – we conclude that doctors' training is also good. I will not question whether the training we provide is good or not; everyone will have their own criteria and their own opinion. But I would ask the reader to reflect on one point: how do we know that bachelor's degree training, the training we deliver in our faculties of medicine, is good? Is specialist training good? Can we be sure? How do we know? What about continuing education? Is it adequate? How do we know? What I am arguing is that we can only say that the different training processes of our doctors are good if we have carried out some kind of evaluation. I am not talking about assessing students, which is also necessary, but rather, and more especially, about evaluating the training processes themselves and, in other instances, the teaching staff and health outcomes.

Moreover, we should not overlook the fact that the important thing is not to train students or professionals; what we should be concerned about is knowing that they have actually learnt. If they have learnt, fine; if they have not, we must correct the mistakes. It is therefore a matter of evaluating first, always giving feedback, and correcting the mistakes, if there are any.

There is one thing that should give us more cause for concern than not doing anything, and that is doing things badly. I mean, if we admonish those responsible for a faculty or healthcare institution because they have not evaluated the training processes, they will almost certainly blush. Yet, if their institution has undergone a programme of evaluation and process improvement, not only will they not blush, but they will also be quick to assure us that they do it well.

Although the assessment of the competences of bachelor’s degree students, of resident physicians and of professionals is a very important topic, the purpose of this editorial is to review the assessment of the training processes at each of the three fundamental levels: bachelor’s degree, specialised training and continuing education.

In the degree course, assessment is apparently well organised. If a faculty wishes to implement a new degree course, it has to submit a report that includes, among other elements, the infrastructural facilities and the human resources needed to teach that degree course, as well as, of course, a detailed teaching plan. In a system that aims to ensure quality, such as ours, this report is evaluated by ANECA or by the corresponding regional agency, which will issue a favourable or unfavourable decision or, if necessary, request amendments. Evaluating this report is a good and useful practice. The greatest proof of its usefulness is that the various commissions introduce amendments into the proposals and suggest improvements that are then addressed by the proposers.

What is not so good is that the only thing evaluated is the content set out on paper. If anyone thinks I am suggesting that the reports contain wilful misrepresentations, they are mistaken. I doubt whether a faculty would attempt to slip a lie into a public document. What I am saying is that papers do not reflect reality. To give a simple example: if a report states that there are 10 large classrooms and 20 small ones, that is sure to be true, but it is just as true that those same classrooms are also used for other degree courses. The same happens with healthcare resources, with the number of patients, and with the human resources or teaching staff. I do not think anyone is unaware that hundreds of new teaching activities have been implemented at no extra cost – bearing in mind that the cost of the stress of a lecturer who has doubled his or her teaching hours in four years is, of course, zero.

Something similar happens with the subsequent evaluation carried out when the accreditation is renewed so that the degree course in question can continue to be taught. In this evaluation, performed on-site and face-to-face after four, five or six years, what is assessed is the extent to which the commitments that were undertaken, and the outcomes of the degree course in question, are being fulfilled.

Again, I must insist that the conclusion is not that the ANECA evaluations are performed badly; what is wrong is a system that conducts a detailed evaluation of some aspects while leaving gaping holes in other parts of the teaching processes. Please allow me the following observation: if there were no assessment at all, we would be outraged and calling for one immediately; but because there is one, as in the case of the bachelor's degree, we are all happy, and such-and-such a faculty can feel satisfied and say out loud: 'I have been evaluated and got a positive result, of course!'.

In specialised training, although the evaluation processes are different and the responsibility falls not on ANECA but on the Ministry of Health itself, we could say that both the strengths and the weaknesses are repeated. The audits and the auditors are not the problem. The problem lies in the structure and, in many cases, in its limited capacity to enforce corrective measures.

In continuing education another fallacy has been introduced to put our minds at rest. We have organised 17 offices, one for each autonomous community, to evaluate what Brussels does for the whole of Europe with three clerks and a good computer system. And we use them to evaluate activities, which is fine, but it should be borne in mind that those activities are only the tip of the iceberg of the continuing education system. We have not even considered evaluating the part of the iceberg that remains hidden from view.

These reflections, which someone is very likely to disagree with on some point or other, show that overall we do not assess training processes well, that we often do not know what we are supposed to evaluate, and that the processes for assessing the bachelor’s degree course, specialised training and continuing education are a long way from what they should be. The participation of experts in evaluation is conspicuous by its absence, the efforts and spending on the evaluation of the processes of training doctors are not efficient and of course nobody wants to do it badly, but we fail to realise that we are doing it wrong.

It is possible that our healthcare system is good. It is possible that our professionals are efficient, efficacious, even excellent. But, could they be better? Nobody can know the answer if we don’t evaluate properly and give suitable feedback.

Creative Commons License. This is an open-access article published under a Creative Commons licence.