Revista de Psicología del Trabajo y de las Organizaciones

Online version ISSN 2174-0534; print version ISSN 1576-5962

Rev. Psicol. Trab. Organ., vol. 34, no. 3, Madrid, December 2018

http://dx.doi.org/10.5093/jwop2018a22 

Articles

Employment Interview Perceptions Scale

Escala de evaluación de la percepción sobre las entrevistas de empleo

Pamela Alonso1, Silvia Moscoso1

1Universidad de Santiago de Compostela, España

ABSTRACT

The objective of this research was to develop and validate the Employment Interview Perceptions Scale (EIPS). This scale evaluates two dimensions: perception of comfort during the interview and perception of the suitability of the interview for applicant evaluation. Two samples were used. The first one was composed of 803 participants, who evaluated their perceptions in an experimental context. The second sample consisted of 199 interviewees, who evaluated their perceptions in a real evaluation context. All participants evaluated their perceptions for two interview types (Structured Conventional Interview and Structured Behavioral Interview). The analyses confirmed the hypothesized factorial structure. The final version of the EIPS includes 11 items, 6 of them make up the first factor, and 5 make up the second factor. Regarding the reliability of the two factors, high values were reported in the two samples.

Keywords: Interview; Applicants’ perceptions; Structured Conventional Interview (SCI); Structured Behavioral Interview (SBI)

RESUMEN

El objetivo de esta investigación era desarrollar y validar la escala de evaluación de la percepción de la entrevista de empleo. Esta escala fue creada para evaluar dos dimensiones: la percepción del confort en la entrevista y la percepción de la idoneidad de la entrevista para la evaluación de los candidatos. Para la validación de la escala se han empleado dos muestras. La primera estaba compuesta por 803 participantes, quienes evaluaron su percepción en un contexto experimental. La otra estaba compuesta por 199 entrevistados, que evaluaron su percepción en un contexto real de evaluación. Todos los participantes evaluaron su percepción de dos tipos de entrevista: la entrevista convencional estructurada y la entrevista conductual estructurada. Los análisis confirmaron la estructura factorial inicial. La versión final de la escala incluye 11 ítems, 6 de ellos componen el primer factor y 5 el segundo. Con respecto a la fiabilidad de ambos factores, se encontraron valores altos en las dos muestras empleadas.

Palabras clave: Entrevista; Percepción de los candidatos; Entrevista convencional estructurada (ECO); Entrevista conductual estructurada (ECE)

Introduction

The interest of industrial and organizational psychologists in applicant reactions has been increasing over the last three decades (Aguado, Rico, Rubio, & Fernández, 2016; Anderson, Salgado, & Hülsheger, 2010; Hausknecht, Day, & Thomas, 2004; Nikolaou, Bauer, & Truxillo, 2015; Rodríguez & López-Basterra, 2018; Truxillo, Bauer, & Garcia, 2017). Evidence that applicant reactions affect organizational outcomes has led organizational research to focus on the applicants’ side as well (Osca, 2007; Rynes, Heneman, & Schwab, 1980; Schuler, 1993). In fact, personnel selection is now understood as a bidirectional process in which candidates’ opinions also matter (De Wolff & van der Bosch, 1984; Hülsheger & Anderson, 2009). This has led to a significant increase in the number of studies on this issue, especially on perceptions of different selection tools and of distributive and procedural justice in different countries and cultures (Anderson, Born, & Cunningham-Snell, 2001; Anderson et al., 2010; Bertolino & Steiner, 2007; Hausknecht et al., 2004; Moscoso & Salgado, 2004; Steiner & Gilliland, 1996).

Given that the employment interview is the most widely used selection tool (Alonso, Moscoso, & Cuadrado, 2015), applicants’ perceptions of this instrument can have an important impact on their perceptions of the hiring process, which makes further research on this issue necessary. The main aim of this research was to develop and validate a scale for learning more about applicants’ perceptions of the employment interview.

Applicants’ Perceptions

Hausknecht et al. (2004) consider that applicants’ perceptions are formed by the set of their opinions about diverse dimensions of organizational justice, their thoughts, and feelings about assessment instruments, and about personnel selection in general. At the same time, applicants’ perceptions establish the basis for their subsequent psychological processes. If their perceptions are positive, this will have a favorable effect on applicants’ reactions. On the contrary, if their perceptions are negative, there is a possibility that the selection process will end up failing since, for example, applicants could react by rejecting the position or taking legal action against the organization.

The possibility that candidates may react negatively to the selection process and its possible consequences is what has caused the interest in this area to increase. In fact, Anderson (2004) and Hausknecht et al. (2004) have highlighted six reasons why applicants’ perceptions should be considered by companies:

  • (1) To avoid the best candidates leaving the process. Rynes (1991) pointed out that applicants form an image of what it would be like to work in a company based on the methods used during the selection process. This is especially relevant considering that research suggests that the most desirable candidates are the most likely to abandon the process (Ployhart, McFarland, & Ryan, 2002). Therefore, an organization would be failing in its attempt to hire the most qualified candidates if a negative perception caused them to abandon the process.

  • (2) To prevent candidates with a negative image from dissuading others. Applicants with a negative perception will also recommend the organization less to other possible candidates (Ryan, Sacco, McFarland, & Kriska, 2000). The emergence of social networks means that this can have a higher impact, since bad opinions about a company can be disseminated immediately.

  • (3) To prevent negative attitudes which harm an organization’s image. Applicants’ experiences during the selection process can affect their attitudes towards the organization, the image they project of it, and even their consumption of the organization’s products and services. Those who have had negative experiences during a selection process could boycott the organization’s products and encourage their friends and acquaintances to do so (Smither, Reilly, Millsap, Pearlman, & Stoffey, 1993). As in the previous case, if unsatisfied applicants use social networks to share their opinions, this could have very negative implications for the company’s image (Kotler, Kartajaya, & Setiawan, 2010).

  • (4) To avoid selected candidates rejecting the offer. Applicants are less likely to accept an offer when they consider the company’s selection procedures to be negative or discriminatory.

  • (5) To prevent applicants’ performance during test taking from being affected by their perceptions. If perceptions affect performance during test taking, they could also affect test validity (Schmit & Ryan, 1992) and hiring decisions (Chan, Schmitt, DeShon, Clause, & Delbridge, 1997).

  • (6) To avoid the possibility of legal claims being initiated. It is very important to consider that applicants who perceive the selection process as unfair and think it is not valid as a predictor of performance could take legal action against the organization (Gilliland, 1993; Smither et al., 1993).

One of the most important contributions to the understanding of applicants’ reactions is Hausknecht et al.’s (2004) meta-analysis. These authors studied the relationships between perceptions of procedure characteristics and perceptions of justice and other outcomes, such as offer acceptance intentions, organizational attractiveness, and recommendation intentions. The results showed that perception of procedure characteristics correlates positively with perceptions of procedural justice, distributive justice, motivation during testing, attitudes towards tests, and attitudes toward the selection process in general. Specifically, face validity and perception of predictive validity correlated positively and moderately with perceived procedural fairness and with the perception of distributive justice, although effect sizes in the latter case were lower. In addition, these characteristics also had a significant impact on attitudes toward testing. Furthermore, moderate relationships were found between some perceptions of procedure characteristics and offer acceptance intentions and organizational attractiveness. Considering these meta-analytic results, we can assume that applicants’ perceptions of selection tools have a considerable impact on their perceptions of the selection process and on their reactions.

Preferences Regarding Selection Instruments

Applicants’ reactions to selection tools have been the object of a series of studies carried out in many countries. Hausknecht et al. (2004) carried out a meta-analysis including primary studies published up to that date. The results showed that the interview was the best-evaluated method, followed by work sample tests, curricula, references, and cognitive ability tests. Subsequently, Anderson et al. (2010) published a new meta-analytic review with a broader sample of primary studies from 17 countries. In this research, in addition to analyzing job seekers’ general perceptions of the 10 best-known selection tools, they examined the generalization of applicants’ reactions. The results were very similar to those found by Hausknecht et al. (2004), the selection interview being the second best-perceived tool after work sample tests. Additionally, they confirmed the generalization of applicants’ reactions.

The results of these two meta-analyses referred to the interview as a single tool, even though in practice there are different types of interviews. In fact, applicants could perceive the interview differently depending on its content or degree of structure. Therefore, it is crucial to examine whether there are differences in applicants’ perceptions of different types of employment interviews.

Applicants’ Perception of Different Types of Interviews

Three main interview types can be distinguished according to their content and degree of structure: (1) the Unstructured Conventional Interview (UCI), in which the interviewer does not follow any script and asks different questions of different interviewees depending on the course of the conversation (Dipboye, 1992, 1997; Goodale, 1982); (2) the Structured Conventional Interview (SCI), in which the interviewer uses a script or a series of guidelines about the information that must be obtained from each interviewee; and (3) the Behavioral Interview (BI), the interview type with the highest degree of structure, which includes questions based on applicants’ behaviors. This type of interview can be divided into two sub-types: (a) the Structured Situational Interview (SSI), in which interviewees are asked how they would perform in a hypothetical situation (Latham, Saari, Pursell, & Campion, 1980), and (b) the Structured Behavioral Interview (SBI), which is based on the evaluation of applicants’ past behaviors (Janz, 1982, 1989; Motowidlo et al., 1992; Salgado & Moscoso, 2002, 2014).

Despite the existence of all these alternatives, meta-analytic results have only recommended the use of SCIs or BIs for hiring decisions. Specifically, these results have shown better psychometric properties, in terms of reliability and criterion validity, for interviews with a higher degree of structure, that is, BIs (e.g., Huffcutt & Arthur, 1994; Huffcutt, Culbertson, & Weyhrauch, 2013, 2014; McDaniel, Whetzel, Schmidt, & Maurer, 1994; Salgado & Moscoso, 1995, 2006). Additionally, the conclusions of several primary studies on other important implications of the different interview types (for example, their resistance to bias, the degree to which interviewers feel confident about their decisions, the probability of producing adverse impact, their economic utility, etc.) also recommended the use of the most structured ones (Alonso, 2011; Alonso & Moscoso, 2017; Alonso, Moscoso, & Salgado, 2017; Rodríguez, 2016; Salgado, 2007).

However, although the literature on interview effectiveness for hiring decisions is extensive, there are few studies on applicant reactions to different types of interviews. Rynes, Barber, and Varma (2000) pointed out that new research was needed on this topic, considering both interview structure and interview content. The main results of the research conducted on this subject are summarized below.

One of the first studies on this topic was that of Latham and Finnegan (1993). These authors analyzed perceptions of the UCI, SCI, and SSI. They found that applicants preferred the UCI to the SSI because they felt that the UCI allowed them to relax, say what they wanted, influence the course of the interview, and, especially, show their motivation. Additionally, Janz and Mooney (1993) analyzed differences between perceptions of the SCI and the SBI. The only significant differences found were that the SBI was perceived as more complete and exhaustive and as having been prepared with a clearer knowledge of the qualities required for the position. However, a few years later, Conway and Peneno (1999) found that candidates had more favorable reactions to conventional questions than to situational or behavioral questions.

In addition, some researchers studied applicants’ reactions to BIs, specifically. Day and Carroll (2003) found that applicants perceived BI favorably. Furthermore, Salgado, Gorriti, and Moscoso (2007) analyzed justice reactions to SBI and found that it was perceived as a good tool for promotion decisions and better than other instruments for hiring processes in public administration.

In summary, the results are scarce, so it is not possible to reach a clear conclusion. More research is therefore needed, especially considering that some results indicate that the interviews with the best psychometric properties could be the worst perceived by candidates (Conway & Peneno, 1999; Rynes et al., 2000). In addition, as Levashina, Hartwell, Morgeson, and Campion (2014) pointed out, employment interviews have a double objective, recruitment and selection, so it is of interest to clarify whether the most efficient interviews could be failing from the point of view of applicants’ attraction to the organization.

Therefore, the main objective of this research was to design and validate a tool that would allow for the evaluation of applicants’ perceptions of employment interviews. Having this tool will allow us to continue advancing in the knowledge of applicants’ perceptions of different types of interviews.

Method

Samples

This study was carried out with two independent samples. The first was composed of 803 university students in various subjects related to human resources; 65.3% were women and the mean age was 24.66 years (SD = 6.52); 63.8% of the sample participated in the study before having received specific training in personnel assessment. Participants evaluated their perceptions of two types of employment interview after having completed an academic exercise in which they had to evaluate two applicants for a job. Therefore, each participant evaluated their perceptions using the same scale twice.

The second sample was composed of 199 students in the final year of their degree; 63.4% were engineering students, 28.6% labor relations students, and 8% students of a master’s degree in Psychology; 52.2% of the sample were women and the mean age was similar to that of the first sample. These participants evaluated their perceptions of two different types of interviews after being interviewed, that is, in a real evaluation context. Therefore, they also used the same scale twice.

Measure

Employment Interview Perceptions Scale (EIPS). The first version of the scale was composed of 14 items, included to measure two factors: perception of interview comfort and perception of the interview’s suitability for applicant evaluation. Items were created taking as a reference scales used in the previous literature, such as those of Salgado et al. (2007) and Steiner and Gilliland (1996). The Spanish version of the scale was used for this validation study.

Respondents had to indicate their degree of agreement with each of the items using a 5-point Likert scale, in which 1 meant totally disagreeing with the statement and 5 totally agreeing. For a correct interpretation of the results, scores were reversed for the following items: “I would be anxious during the interview”, “I would find it difficult to answer the questions”, “It would be difficult for me to prepare the answers in advance”, and “The interview would allow me to fake my responses”. Thus, a higher score on an item indicates a more favorable perception of the interview.
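As a minimal sketch of the reverse-scoring rule just described (the item keys below are hypothetical labels, not the scale’s actual identifiers), a response on a 1-5 Likert scale can be reversed as follows:

```python
# Reverse-score negatively worded items on a Likert scale.
# For a scale running from scale_min to scale_max, the reversed
# score is (scale_max + scale_min) - raw score.

def reverse_score(raw, scale_max=5, scale_min=1):
    """Return the reversed Likert score so that higher = more favorable."""
    return (scale_max + scale_min) - raw

# Example: a respondent strongly agrees (5) with "I would be anxious
# during the interview"; after reversal this counts as 1 (unfavorable).
responses = {"anxious": 5, "hard_to_answer": 2}
reversed_responses = {item: reverse_score(v) for item, v in responses.items()}
print(reversed_responses)  # {'anxious': 1, 'hard_to_answer': 4}
```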

Experiment Preparation

Video recordings. For the study carried out with the first sample, the same recordings as in Alonso and Moscoso (2017) were used. That is, the videos showed the interviews of two applicants: one qualified and the other unqualified for the position of human resources technician. Each candidate was interviewed twice, once with an SCI, and the other one with an SBI. In addition, the roles of both applicants were played by an actor and an actress, which meant that 8 videos were recorded. The questions asked by the interviewers in both interviews aimed to evaluate the candidates in the dimensions of organization and planning, teamwork, problem-solving, and global assessment. However, the type of questions asked differed according to the nature of each interview.

Interview scripts. Two interview scripts were used with the second sample. One for an SCI and another for an SBI. For each interview, the interviewers had the same script used in the videos of the first sample. Therefore, the questions that were included in both interviews also aimed to evaluate the interviewees on the dimensions of organization and planning, teamwork, problem-solving, and global assessment.

Procedure

With the first sample, participants’ perceptions of the interview were evaluated in an experimental context. As part of an academic exercise, the students had to evaluate two applicants for the position of human resources technician. The appraisal had to be made after watching the video of an applicant’s interview. These raters had seen at least one SCI and one SBI, and the order in which they watched the interviews was alternated between the different groups of participants.

Perceptions were assessed after the raters had evaluated the interviewee, once the video of each interview had finished. For this, participants were instructed to put themselves in the applicant’s position: they had to imagine that they were part of the same selection process and would be evaluated with the same interview as in the video, meaning that the interviewer would ask them the same questions. These instructions were complemented by the indications printed in the questionnaire: “Suppose that you have presented yourself as an applicant for this selection process and you have been evaluated with the same interview you have just watched. Please, indicate your degree of agreement with the following statements.”

In the case of the second sample, the interviewees’ perceptions of the two types of interviews were evaluated in a real evaluation context, that is, just after they had been interviewed. The students were participating in a Development Assessment Center (DAC) consisting of various tests typical of a personnel selection process. The DAC was carried out to facilitate the students’ future incorporation into the labor market. It allowed them to gain experience with different selection tools, to learn how their candidatures could be perceived in a real evaluation process, and, with expert advice, to improve their results in future selection processes.

All the participants were interviewed twice. Therefore, a total of 398 interviews were carried out, of which 199 were SCIs and 199 were SBIs. The team of interviewers consisted of six researchers specialized in personnel selection and experienced in conducting interviews. Each participant was interviewed by two different interviewers. At the end of each interview, the interviewers gave the Employment Interview Perceptions Scale to the participant, who had to fill it out at that time. Apart from the interviewers’ instructions, which were the same as those given to the participants in the other sample, the questionnaire included the following indications: “Suppose that you are an applicant in a selection process, and the only test with which you will be evaluated is an interview like the one you have just done. Please, indicate your degree of agreement with the following statements.”

Results

A principal components analysis with Promax oblique rotation was carried out to verify the factorial structure of the Employment Interview Perceptions Scale (EIPS). Given that the main objective of this research was the factorial analysis, all the analyses treated the two evaluations made by each participant (one for the SBI and one for the SCI) as independent evaluations. The FACTOR program (Lorenzo-Seva & Ferrando, 2018) was used for all the analyses. Following the program’s recommendation, polychoric correlations were used, since some items showed asymmetric distributions and excess kurtosis.
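The analyses above were run with the FACTOR program. As a rough sketch of the unrotated principal-components step only (using a made-up correlation matrix, and omitting the polychoric estimation and Promax rotation the authors applied), component loadings can be obtained from the eigendecomposition of a correlation matrix:

```python
import numpy as np

# Illustrative (hypothetical) correlation matrix for four items:
# two pairs of correlated items, suggesting two components.
R = np.array([
    [1.0, 0.6, 0.1, 0.1],
    [0.6, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.6],
    [0.1, 0.1, 0.6, 1.0],
])

# Eigendecomposition of the correlation matrix; eigh returns
# eigenvalues in ascending order, so reorder them descending.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Unrotated component loadings: eigenvectors scaled by sqrt(eigenvalue).
# An oblique rotation such as Promax would be applied to these next.
n_components = 2
loadings = eigvecs[:, :n_components] * np.sqrt(eigvals[:n_components])
print(np.round(loadings, 2))
```

The sum of squared loadings in each column equals the corresponding eigenvalue, which is how the proportion of explained variance per component is obtained.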

Table 1 shows the correlation coefficients between the fourteen items that composed the first version of the scale. Correlations for the experimental sample are presented below the diagonal, and correlations for the interviewees are presented above the diagonal. Correlations between most of the items are very similar in the two samples.

Table 1 Correlation Matrix between the Items of the Employment Interview Perceptions Scale 

Note. Correlations for the experimental context sample (n = 1,595) are presented below the diagonal, and correlations for the sample of the interviewees (n = 396) are presented above the diagonal. 1The item has been reversed.

Looking for the best possible factorial solution, several analyses were carried out considering the elimination of some items. The first analysis included all the items, and some items were removed in the subsequent analyses. The results of these analyses are shown in Table 2. The factor analyses confirm the existence of the two theoretical factors on which the design of the scale was based. To verify that the factorial structure was replicated in the two samples, the resulting factorial structures were compared using Burt’s (1948) and Tucker’s (1951) factor congruence coefficient. The congruence coefficients (CC) between pairs of parallel factors in the sample that made its evaluations in an experimental context (sample a) and the sample of those who had just been interviewed (sample b) were CCF1a-F1b = .95 for one factor and CCF2a-F2b = .96 for the other. According to Cattell’s (1978) criterion, congruence coefficients equal to or greater than .93 are considered significant. Therefore, we can confirm that the factorial structure was replicated in the two samples. Additionally, the reliability coefficients for factor 1 and factor 2 were .842 and .946, respectively, in the experimental sample, and .854 and .905 in the other sample.
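The Burt-Tucker coefficient of factor congruence has a simple closed form: the cross-product of two loading vectors divided by the geometric mean of their squared norms. The sketch below uses hypothetical loading vectors, not the EIPS values:

```python
import math

def congruence_coefficient(f1, f2):
    """Tucker's coefficient of factor congruence between two loading vectors."""
    num = sum(a * b for a, b in zip(f1, f2))
    den = math.sqrt(sum(a * a for a in f1) * sum(b * b for b in f2))
    return num / den

# Hypothetical loadings of the same factor estimated in two samples.
sample_a = [0.80, 0.75, 0.70, 0.65, 0.60, 0.55]
sample_b = [0.78, 0.72, 0.74, 0.60, 0.63, 0.50]
cc = congruence_coefficient(sample_a, sample_b)
print(round(cc, 3))  # close to 1.0, i.e., above Cattell's .93 threshold
```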

Table 2 Factor Loading for Principal Components Analysis with Promax Rotation of the Employment Interview Perceptions Scale in the Two Samples, Reliability Coefficients, Inter-factor Correlations, and Burt & Tucker Congruence Coefficients (version without item 3) 

Note. Factor loadings > .400 are in boldface. α = Internal consistency coefficient; CCBT F1a-F1b = congruency coefficient between the first factor in the two samples; CCBT F1a-F2b = congruency coefficient between the first factor in the first sample and the second factor in the second sample; CCBT F2a-F1b = congruency coefficient between the second factor in the first sample and the first factor in the second sample; CCBT F2a-F2b = congruency coefficient between the second factor in the two samples. 1The item has been reversed.

Confirmatory factor analyses were carried out separately for each factor to confirm an optimal scale. The factor of perception of interview comfort was initially composed of 7 items: (1) “I would be satisfied with the interview”, (2) “I would be motivated during the interview”, (3) “I would be anxious during the interview”, (4) “I would find it difficult to answer the questions”, (5) “I would feel comfortable with the interview questions”, (6) “The interview would respect my privacy”, and (7) “It would be difficult for me to prepare the answers in advance”, but the first factor analysis suggested discarding item 3.

Additionally, regarding the factor loadings reported in Table 2 for the items “I would be satisfied with the interview” and “I would be motivated during the interview”, we believe that these items were interpreted differently depending on the situation. In fact, only in the real evaluation situation do these items load higher on this factor than on the other. In the experimental sample, where the participants had previously acted as raters, they interpreted these items according to the suitability of the interview. This is understandable, since the situation in which they made their evaluations meant that their perceptions of their satisfaction with the interview and their motivation during it were conditioned by how they perceived its suitability. Given this result, we consider it appropriate to reword these two items as follows: “I would be satisfied with my results in the interview” and “I would be motivated during the interview to get the best possible results”.

Finally, the results of the confirmatory factor analyses carried out only with the items composing this factor, including and excluding the item “It would be difficult for me to prepare the answers in advance”, are reported in Table 3. This table shows factor loadings, proportions of explained variance, and several reliability coefficients (Cronbach’s alpha, greatest lower bound, McDonald’s omega, and standardized Cronbach’s alpha). Even considering the problems that the first two items presented in the experimental sample, the table shows favorable results in terms of factor loadings and reliability. However, the scale excluding item 7 proved more efficient, since the proportions of explained variance and the reliability coefficients are higher with fewer items.
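Of the reliability coefficients listed above, Cronbach’s alpha is the most common and the easiest to reproduce. The following sketch computes it from a hypothetical item-score matrix (the data are illustrative, not the study’s):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of five people to three items.
scores = [[4, 5, 4],
          [3, 3, 3],
          [5, 5, 4],
          [2, 2, 3],
          [4, 4, 5]]
print(round(cronbach_alpha(scores), 3))  # → 0.897
```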

Table 3 Factor Loadings in the Confirmatory Factor Analysis for the Factor of Perception of Interview Comfort, the Proportion of the Explained Variance, and Reliability Coefficients, in the Two Samples, Including and Excluding Item 7 

Note. α = internal consistency coefficient. 1The item has been reversed.

Concerning the factor of perception of the suitability of the interview for candidate evaluation, this was initially composed of 7 items: (1) “The interview would allow me to fake my responses”, (2) “The interview would seem fair”, (3) “The interview would allow me to be evaluated objectively”, (4) “The candidates who receive the best evaluations would perform better”, (5) “The interview is adequate for deciding who the best candidate is”, (6) “The interview facilitates the decision making of the interviewers”, and (7) “The interview results are the same for men and women”. However, the results in Table 2 suggested that this last item could be deleted from the scale. The results of the confirmatory factor analyses carried out only with the items composing this factor, including and excluding item 7, are reported in Table 4. The data recommended using the scale excluding this item, since the proportion of explained variance and the reliability coefficients were higher.

Table 4 Factor Loadings in the Confirmatory Factor Analysis for the Factor of Perception of the Suitability of the Interview, the Proportion of the Explained Variance, and Reliability Coefficients, in the Two Samples, Including and Excluding Item 7 

Note. α = internal consistency coefficient. 1The item has been reversed.

In conclusion, the final version of the EIPS is composed of eleven items. The perception of interview comfort factor includes five items and the perception of the suitability of the interview for applicant evaluation factor includes six. The Spanish and English versions of the final scale are reported in Appendixes A and B, respectively.

Discussion

The main objective of this research was to develop and validate a scale that would allow for the evaluation of applicants’ perceptions of different types of employment interviews. Specifically, in addition to designing its content, we intended to verify the factorial structure and psychometric properties of the scale. The results confirm that the scale evaluates the two factors on which its design was based, that is, perception of interview comfort and perception of the suitability of the interview. In addition, favorable reliability results were reported for both factors.

A tool like the EIPS was necessary to study whether there are differences in applicants’ perceptions of the different types of interviews that can be used for personnel selection. Although the studies carried out so far on candidates’ perceptions have shown very favorable results for this instrument, most of them have evaluated perceptions of the interview as a single instrument, when in practice there are different types of interviews with substantial differences among them. This is therefore a field that needs to be studied more deeply. However, it should be remembered that the scale proposed in this research focuses only on the evaluation of applicants’ perceptions related to the interview’s content and structure.

The perceptions that applicants may have of the interview depend on several issues related not only to these two characteristics but also to other variables, such as the information provided to an interviewee before the interview, the impression a candidate has of the interviewer, the interviewer’s warmth during the interview, the feedback received, expectations based on peer communication, etc. (Geenen, Proost, Schreurs, van Dam, & von Grumbkow, 2013; Harris & Fink, 1987; Nikolaou & Georgiou, 2018; Rynes, 1991; Rynes et al., 2000). Although these issues also play a relevant role in applicants’ reactions (Hausknecht et al., 2004), they do not depend exclusively on the type of interview used, but rather vary depending on the organization in which personnel selection is carried out or the people in charge. Therefore, the evaluation of these aspects was not part of the objectives for which the EIPS was designed.

Implications for Practice and Future Research

The EIPS will allow researchers to evaluate the perceptions that applicants have of interviews, so that scientific knowledge on this issue can continue to advance. However, the present research validates only the Spanish version of the EIPS; new studies using the English version are needed to confirm that the same results hold. Additionally, primary studies using either version of the scale will reveal how different interviews are perceived and allow comparisons between interview formats. These primary results could also be integrated into future meta-analyses, providing more precise knowledge of this question.

In addition, advances in the knowledge of applicants’ interview perceptions will contribute to improving professional practice. If interviewers know in advance how applicants are likely to react to the interview they will conduct, they can act accordingly. For example, during the opening phase of the interview, the interviewer could explain to the interviewee the reason for the type of questions that will be asked, which could improve the applicant’s perception of the interview’s suitability. Another example could be trying to reduce the interviewee’s anxiety in those kinds of interview that tend to be perceived as more difficult.

Another contribution of this scale is that it can be adapted to evaluate candidates’ perceptions of other selection tools, allowing researchers to study specifically how the comfort and suitability of other types of tests are perceived. In any case, any such adaptation of the scale would need to be validated in order to confirm that it meets the psychometric criteria necessary for research use.

In conclusion, this scale can contribute to a better understanding of applicants’ perceptions of the most important tool in personnel selection. We hope that the results of future research carried out using the EIPS will promote an improvement in personnel selection practice, in the interests of both practitioners and applicants.

Cite this article as: Alonso, P. & Moscoso, S. (2018). Employment interview perceptions scale. Journal of Work and Organizational Psychology, 34, 203-212. https://doi.org/10.5093/jwop2018a22

Funding: The research reported in this manuscript was supported by Grant PSI2014-56615-P from the Spanish Ministry of Economics and Competitiveness and Grant 2016 GPC GI-1458 from the Consellería de Cultura, Educación e Ordenación Universitaria, Xunta de Galicia.

References

Aguado, D., Rico, R., Rubio, V. J., & Fernández, L. (2016). Applicant reactions to social network web use in personnel selection and assessment. Journal of Work and Organizational Psychology, 32, 183-190. https://doi.org/10.1016/j.rpto.2016.09.001

Alonso, P. (2011). ¿Producen resultado adverso de género las entrevistas estructuradas de selección de personal? Journal of Work and Organizational Psychology, 27, 43-53. https://doi.org/10.5093/tr2011v27n1a5

Alonso, P., & Moscoso, S. (2017). Structured behavioral and conventional interviews: Differences and biases in interviewer ratings. Journal of Work and Organizational Psychology, 33, 183-191. https://doi.org/10.1016/j.rpto.2017.07.003

Alonso, P., Moscoso, S., & Cuadrado, D. (2015). Procedimientos de selección de personal en pequeñas y medianas empresas españolas. Journal of Work and Organizational Psychology, 31, 79-89. https://doi.org/10.1016/j.rpto.2015.04.002

Alonso, P., Moscoso, S., & Salgado, J. F. (2017). Structured behavioral interview as a legal guarantee for ensuring equal employment opportunities for women: A meta-analysis. The European Journal of Psychology Applied to Legal Context, 9, 15-23. https://doi.org/10.1016/j.ejpal.2016.03.002

Anderson, N. (2004). The dark side of the moon: Applicant perspectives, negative psychological effects (NPEs), and candidate decision making in selection. International Journal of Selection and Assessment, 12, 1-8. https://doi.org/10.1111/j.0965-075X.2004.00259.x

Anderson, N., Born, M., & Cunningham-Snell, N. (2001). Recruitment and selection: Applicant perspectives and outcomes. In N. Anderson, D. S. Ones, H. K. Sinangil, & C. Viswesvaran (Eds.), Handbook of industrial, work & organizational psychology. Vol. 1: Personnel psychology (pp. 200-218). London, UK: Sage.

Anderson, N., Salgado, J. F., & Hülsheger, U. R. (2010). Applicant reactions in selection: Comprehensive meta-analysis into reaction generalization versus situational specificity. International Journal of Selection and Assessment, 18, 291-304. https://doi.org/10.1111/j.1468-2389.2010.00512.x

Bertolino, M., & Steiner, D. D. (2007). Fairness reactions to selection methods: An Italian study. International Journal of Selection and Assessment, 15, 197-205. https://doi.org/10.1111/j.1468-2389.2007.00381.x

Burt, C. (1948). The factorial study of temperament traits. British Journal of Statistical Psychology, 1, 178-203. https://doi.org/10.1111/j.2044-8317.1948.tb00236.x

Cattell, R. B. (1978). The identification and interpretation of factors. In R. B. Cattell, The scientific use of factor analysis in behavioral and life sciences (pp. 229-270). https://doi.org/10.1007/978-1-4684-2262-7_10

Chan, D., Schmitt, N., DeShon, R. P., Clause, C. S., & Delbridge, K. (1997). Reactions to cognitive ability tests: The relationships between race, test performance, face validity perceptions, and test-taking motivation. Journal of Applied Psychology, 82, 300-310. https://doi.org/10.1037/0021-9010.82.2.300

Conway, J. M., & Peneno, G. M. (1999). Comparing structured interview question types: Construct validity and applicant reactions. Journal of Business and Psychology, 13, 485-506. https://doi.org/10.1023/A:1022914803347

Day, A. L., & Carroll, S. A. (2003). Situational and patterned behavior description interviews: A comparison of their validity, correlates, and perceived fairness. Human Performance, 16, 25-47. https://doi.org/10.1207/S15327043HUP1601_2

De Wolff, C., & van der Bosch, G. (1984). Personnel selection. In P. D. Drenth, H. Thierry, P. J. Willems, & C. de Wolff (Eds.), Handbook of work and organizational psychology. Chichester, UK: Wiley.

Dipboye, R. L. (1992). Selection interviews: Process perspectives. Cincinnati, OH: South-Western Pub.

Dipboye, R. L. (1997). Structured selection interviews: Why do they work? Why are they underutilized? In N. Anderson & P. Herriott (Eds.), International handbook of selection and assessment (pp. 455-474). London, UK: Wiley.

Geenen, B., Proost, K., Schreurs, B., van Dam, K., & von Grumbkow, J. (2013). What friends tell you about justice: The influence of peer communication on applicant reactions. Journal of Work and Organizational Psychology, 29, 37-44. https://doi.org/10.5093/tr2013a6

Gilliland, S. W. (1993). The perceived fairness of selection systems: An organizational justice perspective. Academy of Management Review, 18, 694-734. https://doi.org/10.5465/AMR.1993.9402210155

Goodale, J. G. (1982). The fine art of interviewing. Prentice Hall.

Harris, M. M., & Fink, L. S. (1987). A field study of applicant reactions to employment opportunities: Does the recruiter make a difference? Personnel Psychology, 40, 765-784. https://doi.org/10.1111/j.1744-6570.1987.tb00623.x

Hausknecht, J. P., Day, D. V., & Thomas, S. C. (2004). Applicant reactions to selection procedures: An updated model and meta-analysis. Personnel Psychology, 57, 639-683. https://doi.org/10.1111/j.1744-6570.2004.00003.x

Huffcutt, A. I., & Arthur, W. (1994). Hunter and Hunter (1984) revisited: Interview validity for entry-level jobs. Journal of Applied Psychology, 79, 184-190. https://doi.org/10.1037/0021-9010.79.2.184

Huffcutt, A. I., Culbertson, S. S., & Weyhrauch, W. S. (2013). Employment interview reliability: New meta-analytic estimates by structure and format. International Journal of Selection and Assessment, 21, 264-276. https://doi.org/10.1111/ijsa.12036

Huffcutt, A. I., Culbertson, S. S., & Weyhrauch, W. S. (2014). Moving forward indirectly: Reanalyzing the validity of employment interviews with indirect range restriction methodology. International Journal of Selection and Assessment, 22, 297-309. https://doi.org/10.1111/ijsa.12078

Hülsheger, U. R., & Anderson, N. (2009). Applicant perspectives in selection: Going beyond preference reactions. International Journal of Selection and Assessment, 17, 335-345. https://doi.org/10.1111/j.1468-2389.2009.00477.x

Janz, T. (1982). Initial comparisons of patterned behavior description interviews versus unstructured interviews. Journal of Applied Psychology, 67, 577-580. https://doi.org/10.1037/0021-9010.67.5.577

Janz, T. (1989). The patterned behavior description interview: The best prophet of the future is the past. In R. W. Eder & G. R. Ferris (Eds.), The employment interview: Theory, research, and practice (pp. 158-168). Thousand Oaks, CA: Sage Publications, Inc.

Janz, T., & Mooney, G. (1993). Interviewer and candidate reactions to patterned behaviour description interviews. International Journal of Selection and Assessment, 1, 165-169. https://doi.org/10.1111/j.1468-2389.1993.tb00106.x

Kotler, P., Kartajaya, H., & Setiawan, I. (2010). Marketing 3.0: From products to customers to the human spirit. Hoboken, NJ: John Wiley & Sons, Inc.

Latham, G. P., & Finnegan, B. J. (1993). Perceived practicality of unstructured, patterned, and situational interviews. In H. Schuler, J. L. Farr, & J. M. Smith (Eds.), Personnel selection and assessment: Individual and organizational perspectives (pp. 41-55). Hillsdale, NJ: Erlbaum.

Latham, G. P., Saari, L. M., Pursell, E. D., & Campion, M. A. (1980). The situational interview. Journal of Applied Psychology, 65, 422-427. https://doi.org/10.1037/0021-9010.65.4.422

Levashina, J., Hartwell, C. J., Morgeson, F. P., & Campion, M. A. (2014). The structured employment interview: Narrative and quantitative review of the research literature. Personnel Psychology, 67, 241-293. https://doi.org/10.1111/peps.12052

Lorenzo-Seva, U., & Ferrando, P. J. (2018). FACTOR (10.8.02) [Computer software]. Tarragona, Spain: Rovira i Virgili University. Retrieved from http://psico.fcep.urv.es/utilitats/factor/

McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79, 599-616. https://doi.org/10.1037/0021-9010.79.4.599

Moscoso, S., & Salgado, J. F. (2004). Fairness reactions to personnel selection techniques in Spain and Portugal. International Journal of Selection and Assessment, 12, 187-196. https://doi.org/10.1111/j.1468-2389.2008.00404.x

Motowidlo, S. J., Carter, G. W., Dunnette, M. D., Tippins, N., Werner, S., Burnett, J. R., & Vaughan, M. J. (1992). Studies of the structured behavioral interview. Journal of Applied Psychology, 77, 571-587. https://doi.org/10.1037/0021-9010.77.5.571

Nikolaou, I., Bauer, T. N., & Truxillo, D. M. (2015). Applicant reactions to selection methods: An overview of recent research and suggestions for the future. In I. Nikolaou & J. K. Oostrom (Eds.), Employee recruitment, selection, and assessment (pp. 80-96). New York, NY: Psychology Press.

Nikolaou, I., & Georgiou, K. (2018). Fairness reactions to the employment interview. Journal of Work and Organizational Psychology, 34, 103-111. https://doi.org/10.5093/jwop2018a13

Osca, A. (2007). La perspectiva del candidato en los procesos de selección. In A. Osca (Ed.), Selección, evaluación y desarrollo de los recursos humanos (pp. 273-303). Madrid, Spain: Sanz y Torres, S.L.

Ployhart, R. E., McFarland, L. A., & Ryan, A. M. (2002). Examining applicants’ attributions for withdrawal from a selection procedure. Journal of Applied Social Psychology, 32, 2228-2252. https://doi.org/10.1111/j.1559-1816.2002.tb01861.x

Rodríguez, A. (2016). Predictive validity and adverse impact of the structured behavioral interview in the public sector. Journal of Work and Organizational Psychology, 32, 75-85. https://doi.org/10.1016/j.rpto.2016.04.003

Rodríguez, A., & López-Basterra, J. (2018). Selection predictors in the public sector: Predictive validity and candidate reactions. Journal of Work and Organizational Psychology, 34, 16-28. https://doi.org/10.5093/jwop2018a3

Ryan, A. M., Sacco, J. M., McFarland, L. A., & Kriska, S. D. (2000). Applicant self-selection: Correlates of withdrawal from a multiple hurdle process. Journal of Applied Psychology, 85, 163-179. https://doi.org/10.1037/0021-9010.85.2.163

Rynes, S. L. (1991). Recruitment, job choice, and post-hire consequences: A call for new research directions. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology. Vol. 2 (2nd ed., pp. 399-444). Palo Alto, CA: Consulting Psychologists Press.

Rynes, S. L., Barber, A. E., & Varma, G. H. (2000). Research on the employment interview: Usefulness for practice and recommendations for future research. In C. L. Cooper (Ed.), Industrial and organizational psychology: Linking theory with practice. Oxford, UK: Blackwell Publishers.

Rynes, S. L., Heneman, H. G., & Schwab, D. P. (1980). Individual reactions to organizational recruiting: A review. Personnel Psychology, 33, 529-542. https://doi.org/10.1111/j.1744-6570.1980.tb00481.x

Salgado, J. F. (2007). Análisis de utilidad económica de la entrevista conductual estructurada en la selección de personal de la administración general del País Vasco. Revista de Psicología del Trabajo y de las Organizaciones, 23, 139-154.

Salgado, J. F., Gorriti, M., & Moscoso, S. (2007). La entrevista conductual estructurada y el desempeño laboral en la Administración pública española: propiedades psicométricas y reacciones de justicia. Revista de Psicología del Trabajo y de las Organizaciones, 23, 39-55.

Salgado, J. F., & Moscoso, S. (1995). Validez de las entrevistas conductuales estructuradas. Revista de Psicología del Trabajo y las Organizaciones, 11, 9-24.

Salgado, J. F., & Moscoso, S. (2002). Comprehensive meta-analysis of the construct validity of the employment interview. European Journal of Work and Organizational Psychology, 11, 299-324. https://doi.org/10.1080/13594320244000184

Salgado, J. F., & Moscoso, S. (2006). Utiliser les entretiens comportementaux structurés pour la sélection du personnel? In C. Lévy-Leboyer, C. Louche, & J. P. Rolland (Eds.), RH: Les apports de la psychologie du travail. Vol. 1 (pp. 195-207). Paris, France: Éditions d'Organisation.

Salgado, J. F., & Moscoso, S. (2014). La entrevista conductual estructurada de selección de personal: teoría, práctica y rentabilidad (4th ed.). Madrid, Spain: Pirámide.

Schmit, M. J., & Ryan, A. M. (1992). Test-taking dispositions: A missing link? Journal of Applied Psychology, 77, 629-637. https://doi.org/10.1037/0021-9010.77.5.629

Schuler, H. (1993). Social validity of selection situations: A concept and some empirical results. In H. Schuler, J. L. Farr, & M. Smith (Eds.), Personnel selection and assessment: Individual and organizational perspectives (pp. 11-26). Hillsdale, NJ: Lawrence Erlbaum Associates Publishers.

Smither, J. W., Reilly, R. R., Millsap, R. E., Pearlman, K., & Stoffey, R. W. (1993). Applicant reactions to selection procedures. Personnel Psychology, 46, 49-76. https://doi.org/10.1111/j.1744-6570.1993.tb00867.x

Steiner, D. D., & Gilliland, S. W. (1996). Fairness reactions to personnel selection techniques in France and the United States. Journal of Applied Psychology, 81, 134-141. https://doi.org/10.1037/0021-9010.81.2.134

Truxillo, D. M., Bauer, T. N., & Garcia, A. M. (2017). Applicant reactions to hiring procedures. In H. W. Goldstein, E. D. Pulakos, J. Passmore, & C. Semedo (Eds.), The Wiley Blackwell handbook of the psychology of recruitment, selection and employee retention (pp. 53-70). https://doi.org/10.1002/9781118972472.ch4

Tucker, L. R. (1951). A method for synthesis of factor analysis studies (PRS Report No. 984). Retrieved from http://www.dtic.mil/dtic/tr/fulltext/u2/047524.pdf

Received: May 28, 2018; Accepted: September 12, 2018

Conflict of Interest

The authors of this article declare no conflict of interest.

Correspondence: pamela.alonso@usc.es (P. Alonso).

Creative Commons License This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial No Derivative License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium provided the original work is properly cited and the work is not changed in any way.