Anales de Psicología
Online version ISSN 1695-2294; print version ISSN 0212-9728
Anal. Psicol. vol. 33, no. 2, Murcia, May 2017
https://dx.doi.org/10.6018/analesps.33.2.256711
Factor structure and stability of a quality questionnaire within a postgraduate program
Estructura factorial y estabilidad de un cuestionario sobre la calidad de un programa de postgrado
Javier M. Moguerza1, Juan José Fernández-Muñoz1, Andrés Redchuk1, Clara Cardone-Riportella2 and Esperanza Navarro-Pardo3
1 Rey Juan Carlos University, Madrid (Spain)
2 Pablo Olavide University, Seville (Spain)
3 University of Valencia, Valencia (Spain)
This work has been partially funded by projects GROMA (MTM2015-63710-P), PPI (RTC-2015-3580-7) and UNIKO (RTC-2015-3521-7), funded by the Ministry of Economy and Competitiveness (Spain); project SEJ-141, funded by the Regional Government of Andalucía (Spain); and the "methaodos.org" research group at University Rey Juan Carlos.
ABSTRACT
In this work we describe an instrument, based on the use of a factor analysis technique, to measure the quality of education within a postgraduate degree offered by a Spanish public university. We show that the instrument has satisfactory psychometric properties (reliability and validity). Regarding the factorial solution, three main dimensions were determined, namely: importance given to the subject, educational resources, and knowledge of the subject (previous and posterior). It is important to remark that these three dimensions were consistently detected in all the factorial analyses performed (total sample and separate academic years). These three dimensions should be considered fundamental when designing an instrument to evaluate educational quality. These findings may serve as a basis for the design of future strategies for the evaluation of educational quality in other types of degrees within the higher education area.
Key words: academic quality, factor analysis, reliability, educational resources and validity.
RESUMEN
En este trabajo se describe un instrumento basado en el uso de una técnica de análisis factorial con el fin de medir la calidad de la educación a través de una muestra de estudiantes de postgrado de una universidad pública española. El instrumento tiene unas aceptables propiedades psicométricas (fiabilidad y validez). En cuanto a la solución factorial, tres dimensiones principales se han determinado: la importancia dada a la materia; recursos educativos y conocimiento de la materia (anterior y posterior). Es importante destacar que estas tres dimensiones se han detectado consistentemente en todo el análisis factorial: muestra total y cursos separados. Estas tres dimensiones deben ser consideradas como aspectos fundamentales en el diseño de un instrumento para evaluar la calidad educativa. Estos hallazgos pueden ser tomados como base para el diseño de estrategias futuras para la evaluación de la calidad educativa en otro tipo de estudios dentro del área de la educación superior.
Palabras clave: calidad educativa, análisis factorial, fiabilidad, recursos educativos y validez.
Introduction
The measurement of the quality of academic work within higher education is nowadays a very important topic. Higher education is passing through a period of reorganization and establishment of new principles. Under the Bologna Process, the European Higher Education Area (EHEA) was created to make academic degree standards and quality assurance standards more comparable and compatible throughout Europe.
Regarding university education, the Bologna Process introduced in Spain (and in many other European countries) the concept of an Official Master Program in the national academic structure. Therefore, the attention devoted to the measurement and evaluation of the quality of postgraduate programs, and particularly of Master programs, is quite a new phenomenon for Spanish authorities. To date, each institution usually designs its own questionnaire and, as a consequence, only internal evaluations are carried out. In this sense, students' satisfaction with these programs has been studied in other academic systems, and those results should be taken into account (Dubas, Ghani & Strong, 1998; Marks, 2001; Martin & Bray, 1997). As a result, both generic and more specialized Master programs are growing in the higher education market, and quality evaluation has become increasingly important (Cecchini, González-Pienda, Méndez-Giménez, Fernández-Río, Fernández-Losa & González, 2014; Lado, Cardone & Rivera, 2003).
To this aim, several statistical methods have been used in the previous literature, namely: i) factor analysis techniques for analysing the motivations of university students (Juric, Todd & Henry, 1997); ii) cluster analysis to analyse student profiles (Stafford, 1994); iii) multidimensional scaling to evaluate faculty performance (Herche & Swenson, 1991); iv) conjoint analysis to design the course offering (Dubas & Strong, 1993); or v) even the study of the repositioning of universities and their Master programs (Goldgehn & Kane, 1997). The current work fits in the first group of references, that is, the use of factor analysis techniques for the design of a consistent instrument within the field of educational quality. A review of statistical quality tools can be consulted in Ehling & Körner (2007).
Academic quality is generally analysed using periodic surveys as an assessment instrument, a methodology systematically used by 98% of universities and 99% of business schools in the United States (McKeachie, 1997; Moreno & Ríos, 1998; Simpson & Siguaw, 2000). These authors report that teachers have perceived certain weaknesses in the surveys (for instance, that they may be affected by extraneous variables acting as potential biases) and have developed different practices to influence these evaluations. A review of the most widely used questionnaires in these areas can be found in Guolla (1999) and Marsh (1994). Regarding the relation between questionnaire design for educational purposes and structural equation modelling using factor analysis, an interesting study is that of Schreiber, Nora, Stage, Barlow, and King (2006).
In this paper, we propose an instrument for academic quality measurement focused on three main dimensions: educational resources, importance given to the subject, and knowledge of the subject (previous and posterior). Each dimension has been expanded through a number of items that will be described throughout the paper. The instrument is based on the use of a factor analysis technique (Mardia, Kent & Bibby, 1979). There are some works in the literature using this technique for educational purposes, for instance O'Neil & Abedi (1996) and Pohlmann (2004).
This paper is organized as follows: Section 2 presents the methodology, including the sample characteristics, the sampling procedure, a description of the instrument structure and, finally, the statistical tool used; Section 3 describes the application of the methodology and the main results obtained. Finally, Section 4 presents the main conclusions and discusses future research.
Method
Participants
The analysis focuses on a sample of students enrolled in a Master of Business Administration offered by a Spanish public university in two language versions: Spanish and English. Students' ages range from 23 to 35, with an average of 25.7 (SD = 2.3).
Procedure
Data come from a survey carried out over five academic years. Students were asked to evaluate three types of courses: 10.7% of the courses cover qualitative topics, 54.3% quantitative topics, and 35.5% topics that include both perspectives, qualitative and quantitative. The percentages of the sample corresponding to each academic year are: first (30.3%), second (23.6%), third (12.2%), fourth (15.5%) and fifth (18.4%). Courses are distributed into academic years and, within each academic year, into terms (Term 1, Term 2 and Term 3). Each student enrolled in a course answered a questionnaire. A total of 5769 valid questionnaires were obtained. Table 1 summarizes the distribution of the received questionnaires per academic year and term.
Instruments
The questionnaire was composed of 8 items distributed into three dimensions: a) importance given to the subject; b) educational resources; and c) knowledge of the subject (previous and posterior):
a) The first dimension includes two items: P1 evaluates the students' interest in the subject, and P2 refers to the degree of integration of the course in the Master.
b) The second dimension is made up of four items, namely: P3 evaluates the clarity of the teacher's explanations; P4 evaluates the teacher's punctuality; P5 refers to the teacher's promotion of participation in class; and P6 evaluates the usefulness and interest of the readings and recommended bibliography.
c) Finally, the third dimension is composed of two items: P7 evaluates the output level reached in the course, and P8 evaluates the input level prior to the course.
All questions were assessed on a 5-point Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree).
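The three-dimension, eight-item structure above can be encoded directly as a data structure. The following sketch is illustrative only: the dimension keys and the scoring function are hypothetical names, not part of the paper's instrument, and the example response is invented.

```python
# Illustrative encoding of the questionnaire structure described above.
# Dimension names paraphrase the paper; the scoring function is hypothetical.

DIMENSIONS = {
    "importance_given_to_subject": ["P1", "P2"],
    "educational_resources": ["P3", "P4", "P5", "P6"],
    "knowledge_of_subject": ["P7", "P8"],
}

def subscale_means(response):
    """Mean score per dimension for one respondent.

    `response` maps item labels (P1..P8) to 5-point Likert values (1-5).
    """
    for item, value in response.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{item}: Likert values must lie in 1..5")
    return {
        dim: sum(response[i] for i in items) / len(items)
        for dim, items in DIMENSIONS.items()
    }

example = {"P1": 4, "P2": 5, "P3": 3, "P4": 4, "P5": 3, "P6": 2, "P7": 4, "P8": 2}
print(subscale_means(example))
```

A layout like this keeps the item-to-dimension mapping in one place, which is convenient when the factorial solution is later compared against the conceptual structure.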
Analysis
The data have been analyzed using the R statistical software, in particular the "psych" package1, a library specially devoted to personality and psychological research. Within this library we have used the factor analysis functions with the standard Promax transformation. The number of factors has been determined through a hypothesis test and also using parallel analysis (Horn, 1965; Lloret-Segura, Ferreres-Traver, Hernández-Baeza, & Tomás-Marco, 2014). This process has been repeated for each academic year under study in order to determine the temporal stability of the scale and its factor structure.
After determining the factorial solution, we randomly selected a smaller subsample (50%) to apply an exploratory factor analysis (Ferrando & Lorenzo-Seva, 2014) and identify the factorial solution; after that, we proceeded with a confirmatory factor analysis (CFA), accompanied by goodness-of-fit indices. Confirmation of the adequacy of the model has been established from the following indices: the chi-square statistic χ² (Jöreskog & Sörbom, 1979; Saris & Stronkhorst, 1984); the goodness-of-fit index (GFI), considered acceptable when its value is over .90 (Bentler, 1990); the root mean square residual (RMR) and the root mean square error of approximation (RMSEA), both considered acceptable when taking values under .08 (Jöreskog & Sörbom, 1979; Steiger & Lind, 1980); and the incremental fit indices, namely the comparative fit index (CFI), the normed fit index (NFI), also called delta 1, and the incremental fit index (IFI), all three ranging from 0 to 1 and considered acceptable over .90 (Bentler, 1990).
Results
Table 2 shows the descriptive statistics (mean and standard deviation) and the skewness and kurtosis indices of the answers. The skewness and kurtosis indices were acceptable, with the skewness standard error ranging from .059 to .092 and the kurtosis standard error from .117 to .184.
Table 3 summarizes the factorial structure of the instrument, the inter-correlations between factors and the cumulative variances for each academic year. For each academic year, the factorial solution leads to three factors. It is important to remark that the factorial structure is exactly the same for each academic year analyzed. This shows that the instrument is temporally stable; that is, applying the instrument to different data from different time windows leads to the same factorial solution. The first factor is composed of items 1 and 2; the second factor is made up of items 3, 4, 5 and 6; and, finally, the third factor includes items 7 and 8. Thus, the exploratory factor analysis confirms the previous conceptual structure of the questionnaire. In summary, after analyzing the items within each factor, three main dimensions arise: the first factor describes the importance given by students to the subject; the second factor refers to the educational resources available; and the third factor is related to the overall knowledge of the subject.
Regarding reliability, Cronbach's alpha takes the value .93. For the different academic years analysed, Cronbach's alpha ranges from .892 to .955; that is, similar values are obtained for all of them. Since all the values are over the .80 threshold, the reliability indices obtained can be interpreted as excellent and, in addition, coherent with the internal consistency of the instrument.
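The reliability values reported above rest on the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch follows; the toy data are invented for illustration and are not the paper's responses.

```python
# Sketch of the Cronbach's alpha computation behind the reported
# reliability values (.93 overall). Toy data below are illustrative only.

def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """rows: one list of k item scores per respondent."""
    k = len(rows[0])
    items = list(zip(*rows))          # transpose: one tuple of scores per item
    totals = [sum(r) for r in rows]   # total score per respondent
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# Perfectly consistent toy responses yield alpha = 1.0
print(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]))
```

The same quantity is what R's "psych" package reports via its `alpha()` function, which the authors' toolchain would have provided directly.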
In order to test the exploratory factor solution, we applied a confirmatory factor analysis for each academic year and for the whole set of data. All the goodness-of-fit indices achieved nearly optimal values for each year. For the sake of space, we describe the results of the confirmatory factor analysis corresponding to the total sample: χ² = 199.72; χ²/degrees of freedom = 11.74; GFI = .99; AGFI = .98; NFI = .98; CFI = .98; RMR = .02 and RMSEA = .04. The results are within the optimal values required for each goodness-of-fit index, as described in the Analysis section. The complete results for the total sample and each academic year can be consulted in Table 4.
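The reported fit statistics can be cross-checked against one another. From χ² = 199.72 and χ²/df = 11.74, the implied degrees of freedom are 17 (consistent with 8 observed variables and a three-factor CFA: 36 distinct variances/covariances minus 19 free parameters). The usual point estimate of RMSEA, sqrt(max(0, χ² - df) / (df · (N - 1))), then reproduces the reported .04. The formula is standard; the value df = 17 is our inference, not stated in the paper.

```python
import math

# Cross-check of the reported RMSEA from the other reported quantities.
# chi2 and N come from the paper; df = 17 is implied by chi2/df = 11.74.
chi2, df, n = 199.72, 17, 5769

# Point estimate: sqrt(max(0, chi2 - df) / (df * (N - 1)))
rmsea = math.sqrt(max(0.0, chi2 - df) / (df * (n - 1)))
print(round(rmsea, 2))  # rounds to 0.04, matching the reported value
```

This kind of arithmetic check is a quick way to verify that a set of published fit indices is internally consistent.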
Conclusion
In this study we have described an instrument to measure the quality of education within a Master of Business Administration (MBA) offered by a public Spanish university. The instrument has shown adequate psychometric properties (reliability and validity). Regarding reliability, data have been analysed separately for each academic course and as a whole. In this sense, the internal consistency of the instrument was adequate according to the values of the consistency indices considered.
According to the factorial solution, three main dimensions have been determined, namely: importance given to the subject, educational resources, and knowledge of the subject (previous and posterior). It is important to remark that these three dimensions were consistently detected in all the factorial analyses performed (total sample and separate academic years). These three dimensions should be considered fundamental when designing an instrument to evaluate educational quality. In the particular case at hand, each dimension is made up of specific items, which are reflected in the questionnaire. These findings allow the design of future strategies for the evaluation of educational quality, aimed at an overall improvement of the instruments used to evaluate students' perception of the described dimensions. In the case we have analysed, these three dimensions should be the core of questionnaires for the analysis of postgraduate studies. Students applying for this kind of degree are looking for specialized topics ("importance given to the subject"), institutions with facilities and resources ("educational resources"), and an improvement in their a priori knowledge of subjects ("knowledge of the subject: previous and posterior").
One of the main goals of the proposed methodology may be the temporal validation of questionnaires. A well-designed questionnaire should be stable in time; that is, different samples in different periods should allow the evaluation of the data under similar criteria. In the case at hand, we have shown that the analysis of several datasets using the same instrument provides equivalent factor structures (i.e., the same dimensional structure).
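The paper establishes stability by recovering identical item-to-factor assignments each year. A common numeric complement, not used in the paper but standard in the factor analysis literature, is Tucker's congruence coefficient between the loading vectors of two solutions; values above roughly .95 are usually read as factor equality. The loadings below are hypothetical, chosen only to illustrate the computation.

```python
import math

# Tucker's congruence coefficient between two loading vectors:
#   phi(a, b) = sum(a_i * b_i) / sqrt(sum(a_i^2) * sum(b_i^2))
# A sketch of a stability check across years; loadings are invented.

def congruence(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return num / den

year1 = [0.81, 0.77, 0.12, 0.08]  # hypothetical loadings, factor 1, year 1
year2 = [0.79, 0.80, 0.10, 0.11]  # hypothetical loadings, factor 1, year 2
print(round(congruence(year1, year2), 3))  # close to 1: essentially the same factor
```

Applied to the per-year loading matrices in Table 3, a check like this would quantify the stability that the identical factor assignments already suggest.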
In spite of the results obtained, the instrument has several limitations. First of all, we have analysed the psychometric properties of this instrument; some additional analyses should be performed in order to confirm whether the dimensions detected really include all the possible factors that explain educational quality, or whether some other dimensions should be included. Secondly, the method has been applied to a Spanish sample of postgraduate students. The same methodology should be applied to check this factorial structure in other postgraduate programs, or even in different types of studies (degrees, bachelors). It is important to remark that the particular content structure of each subject (qualitative or quantitative) may influence the results. In this sense, we recommend the future application of this instrument in other settings and topics.
1 For more information see: http://www.r-project.org
References
1. Bentler, P.M. (1990). Comparative fit indices in structural models. Psychological Bulletin, 107, 238-246. [ Links ]
2. Dubas, K. & Strong, J. (1993). Course Design Using Conjoint Analysis. Journal of Marketing Education, 15, 31-36. [ Links ]
3. Dubas, K., Ghani, W., Davis, S. & Strong, J. (1998). Evaluating Market Orientation of an Executive MBA Program. Journal of Marketing for Higher Education, 8(4), 49-59. [ Links ]
4. Ehling, M. & Körner, T. (2007). Handbook on Data Quality Assessment Methods and Tools. Eurostat, European Commission, Wiesbaden. [ Links ]
5. Ferrando, J.P. & Lorenzo-Seva, U. (2014). Exploratory Item Factor Analysis: Some additional considerations. Anales de Psicología, 30(3), 1170-1175. [ Links ]
6. Goldgehn, L. & Kane, K. (1997). Repositioning the MBA: Issues and Implications. Journal of Marketing for Higher Education, 8(1), 15-24. [ Links ]
7. Guolla, M. (1999). Assessing the teaching quality to student satisfaction relationship: Applied customer satisfaction research in the classroom. Journal of Marketing Theory and Practice, 7(3), 87-98. [ Links ]
8. Herche, J. & Swenson, M. (1991). Multidimensional Scaling: A Market Research Tool to Evaluate Faculty Performance in the Classroom. Journal of Marketing Education, 13, 14-20. [ Links ]
9. Horn, J.L. (1965). A Rationale and Test for the Number of Factors in Factor Analysis. Psychometrika, 30, 179-85. [ Links ]
10. Jöreskog, K.G. & Sörbom, D. (1979). Advances in factor analysis and structural equation models. Cambridge, MA: Abt Books. [ Links ]
11. Juric, B., Todd, S. & Henry, J. (1997). From the Student Perspective: Why Enrol in an Introductory Marketing Course? Journal of Marketing Education, 19(1), 65-76. [ Links ]
12. Lado, N., Cardone, C. & Rivera, P. (2003). Measurement and Effects of Teaching Quality: An Empirical Model Applied to Masters Programs. Journal of the Academy of Business Education, 4, 28-40. [ Links ]
13. Lloret-Segura, S., Ferreres-Traver, A., Hernández-Baeza, A. & Tomás-Marco, I. (2014). Exploratory item factor analysis: A practical guide revised and updated. Anales de Psicología, 30(3), 1151-1169. [ Links ]
14. Mardia, K., Kent, J. & Bibby, J. (1979). Multivariate analysis. London: Academic Press. [ Links ]
15. Marks, R.B. (2001). Determinants of Student Evaluations of Global Measures of Instructor and Course Value. Journal of Marketing Education, 22(2), 108-119. [ Links ]
16. Marsh, H. (1994). Weighting for the Right Criteria in Instructional Development and Effective Assessment (IDEA) System: Global and Specific Ratings of teaching Effectiveness and their Relation to Course Objectives. American Psychologist, 86(4), 631-648. [ Links ]
17. Martin, G. & Bray, G. (1997). Assessing Customer Satisfaction with a Master of Business Administration Program: Implications for Resource Allocation. Journal of Marketing for Higher Education, 8(2), 15-28. [ Links ]
18. McKeachie, W. (1997). Student ratings: The validity of use. American Psychologist, 52, 1218-1225. [ Links ]
19. Moreno, A. & Ríos, D. (1998). Issues in Service Quality Modelling. In Bernardo, J.M., et al. (Eds.), Bayesian Statistics, 6, 441-457. [ Links ]
20. O'Neil, H.F. Jr. & Abedi, J. (1996). Reliability and Validity of a State Metacognitive Inventory: Potential for Alternative Assessment. The Journal of Educational Research, 82(4), 234-245. [ Links ]
21. Pohlmann, J.T. (2004). Use and Interpretation of Factor Analysis in the Journal of Educational Research: 1992-2002. The Journal of Educational Research, 28(1), 14-22. [ Links ]
22. Saris, W. E. & Stronkhorst, H. (1984). Causal modelling in non-experimental research: An introduction to the LISREL approach. Amsterdam: Sociometric Research Foundation. [ Links ]
23. Schreiber, J. B., Nora, A., Stage, F. K., Barlow, E. A. & King, J. (2006). Reporting Structural Equation Modeling and Confirmatory Factor Analysis Results: A Review. The Journal of Educational Research, 22(6), 323-337. [ Links ]
24. Simpson, P. & Siguaw J. (2000). Student evaluations of teaching: An exploratory study of the faculty response. Journal of Marketing Education, 22(3), 199-213. [ Links ]
25. Stafford, T. (1994). Consumption Values and the Choice of Marketing Electives: treating Students like Customers. Journal of Marketing Education, 16(2), 26-33. [ Links ]
26. Steiger, J.H. & Lind, C. (1980). Statistically based tests for the number of common factors. Paper presented at the annual meeting of the Psychometric Society, Iowa City, IA. [ Links ]
Correspondence:
Dra. Esperanza Navarro-Pardo.
Department of Developmental and Educational Psychology.
University of Valencia (Spain).
E-mail: esperanza.navarro@uv.es
Article received: 17-04-2016;
revised: 03-05-2016;
accepted: 17-05-2016