Revista de Psicología del Trabajo y de las Organizaciones

On-line version ISSN 2174-0534 | Print version ISSN 1576-5962

Rev. psicol. trab. organ. vol. 37 no. 1, Madrid, Apr. 2021. Epub Mar 29, 2021

https://dx.doi.org/10.5093/jwop2021a7 

ARTICLES

Faking effects on the factor structure of a quasi-ipsative forced-choice personality inventory


Alexandra Martínez1, Silvia Moscoso1, Mario Lado1

1University of Santiago de Compostela, Spain

ABSTRACT

Research has shown that faking behavior affects the factor structure of single-stimulus (SS) personality measures. However, no published research has analyzed the effects of this phenomenon on the factor structure of forced-choice (FC) personality inventories. This study examines the effects of faking, induced in a laboratory setting, on the construct validity of a quasi-ipsative FC personality inventory based on the Five-Factor model. It also examines the moderating effect of the type of experimental design (between-subject and within-subject designs) on the factor analyses. The results showed that (a) the data fit a five-factor structure in both conditions (honest and faking) and in both experimental designs; (b) model fit indices are good or excellent in all cases; and (c) Burt-Tucker's congruence coefficients between the convergent factors of the conditions analyzed are very high. These findings provide evidence that the quasi-ipsative FC format is a robust instrument that controls the effects of faking on factor structure. Finally, we discuss the theoretical and practical implications of these findings for personnel selection and assessment.

Keywords: Faking; Five-factor model; Quasi-ipsative forced-choice inventories; Construct validity; Measurement equivalence/invariance


Introduction

Personality inventories are an assessment procedure widely used in organizational settings, especially for high-stakes selection decisions (e.g., Alonso et al., 2015; García-Izquierdo et al., 2019; García-Izquierdo et al., 2020; Golubovich et al., 2020; Heller, 2005; Rothstein & Goffin, 2006; Sackett, 2017). Empirical evidence has shown that personality instruments based on the Five-Factor model are valid predictors of relevant organizational and academic criteria, such as overall job performance, job satisfaction, leadership, and counterproductive behavior, among others (see, for instance, Barrick et al., 2001; Bartram, 2005; Cuadrado et al., 2020, 2021; Delgado-Rodríguez et al., 2018; Judge et al., 2013; Lado & Alonso, 2017; Poropat, 2009; Salgado, 1997, 2003; Salgado et al., 2015; Salgado et al., 2013; Salgado et al., 2020). Conscientiousness and emotional stability predicted all the criteria analyzed, while the other three factors (extraversion, openness to experience, and agreeableness) predicted specific criteria in particular occupational categories. For instance, extraversion is a predictor of performance in jobs that require social interaction, agreeableness is a performance predictor for occupations oriented toward cooperation and helping others, and openness to experience is a relevant predictor of performance for jobs requiring, for example, high levels of creativity.

Typically, personality has been assessed with single-stimulus (SS) questionnaires. SS instruments are characterized by the fact that individuals must rate every item of the test, indicating their level of agreement with it (through Likert scales, true/false, or yes/no formats), in order to describe their personality. However, due to the characteristics of this response format, SS questionnaires have frequently been criticized for their potential susceptibility to faking behavior (e.g., Christiansen et al., 2005; Griffith & McDaniel, 2006; McFarland & Ryan, 2000; Rosse et al., 1998; Viswesvaran & Ones, 1999). Meta-analytic research by Viswesvaran and Ones (1999), Birkeland et al. (2006), and Salgado (2016) pointed out that individuals can distort their scores on SS instruments if they are motivated to fake.

In order to reduce or control the effects of faking on personality assessment, forced-choice (FC) personality inventories have been suggested as an alternative. Meta-analytic evidence revealed that when the Big Five personality dimensions are assessed with one specific class of FC personality questionnaires, namely quasi-ipsative FC personality questionnaires, they show larger validity coefficients for predicting job performance than SS personality inventories (e.g., Salgado, 2017; Salgado & Táuriz, 2014; Salgado et al., 2015). Recent meta-analytic studies have also shown that quasi-ipsative FC personality questionnaires are robust against faking effects (Cao & Drasgow, 2019; Martínez, 2019; Salgado & Lado, 2018).

However, some issues concerning quasi-ipsative FC personality inventories remain unexamined; for instance, whether the factor structure (construct validity) of FC personality inventories is resistant to the effects of faking. This article reports a study that examines the effects of faking on the factor structure of a quasi-ipsative FC personality inventory. To the best of our knowledge, no previous research has been undertaken to analyze this issue. Specifically, the study first examines whether the quasi-ipsative FC inventory presents measurement invariance between honest and faking response conditions. Second, it examines whether the type of experimental design (within-subject and between-subject designs) is a moderator of the effects of faking on the construct validity of quasi-ipsative FC inventories.

Faking Behavior

McFarland and Ryan (2000) defined faking as “a type of response bias where an individual consciously distorts answers to be viewed favorably” (p. 813). Along the same lines, Paulhus (2002) described this behavior as the intention to show an “overly positive self-description” (p. 50) and, more recently, Ziegler et al. (2012) provided a more comprehensive definition of faking, describing this behavior as “a set of responses aimed at providing a portrayal of the self that helps a person to achieve personal goals. Faking occurs when this response set is activated by situational demands and personal characteristics to produce systematic differences in test scores that are not due to the attribute of interest” (p. 8). Hence, from these definitions, faking can be understood as intentional behavior to distort responses, by choosing the most socially desirable answer, to offer an image that favors the individual in the evaluation process.

Due to the adverse characteristics of this behavior, there is considerable concern about the potential negative effects that faking could have on SS personality questionnaires (see, for instance, Birkeland et al., 2006; Salgado, 2016; Viswesvaran & Ones, 1999). Specifically, research has focused on the effects of faking behavior on (a) the scores, (b) the reliability, (c) the validity, and (d) the ranking of candidates (hiring decisions).

The effects on mean scores are among the most studied. The meta-analyses of Viswesvaran and Ones (1999), Birkeland et al. (2006), Hooper (2007), and Salgado (2016) have shown that faking causes an increase in the scores of personality measures and that this effect is greatest for conscientiousness and emotional stability in all cases. Likewise, meta-analytic evidence showed that faking also reduces the magnitude of the standard deviations of scores obtained with SS questionnaires (Salgado, 2016; Viswesvaran & Ones, 1999). Together, these two effects reflect an artificial homogenization of the samples, producing on average a reduction in the range of scores obtained by individuals. Therefore, it becomes more difficult to differentiate people's suitability to be hired (Salgado, 2016).

In addition, empirical evidence has shown that the reliability of SS questionnaires could also be affected by faking. The findings on this issue have revealed that reliability decreases significantly, that is, the degree of error contained in the measure increases when faking occurs (see, for instance, Salgado, 2016; Stark et al., 2012; Van Iddekinge et al., 2005). Consequently, the scores obtained with SS measures could be less reliable if faking is affecting the data, because the standard error of measurement is larger.

Regarding criterion validity, the findings of the recent meta-analysis by Salgado (2016) have shown that faking attenuates the criterion validity of SS questionnaires. When individuals fake, the personality variables are measured less accurately, which necessarily implies a reduction in the predictive power of these instruments.

Finally, in relation to construct validity, the effects found were less consistent and were examined for SS questionnaires only. Some studies have shown that faking does not affect the factor structure of personality measures (Ellingson et al., 2001; Marshall et al., 2005; Michaelis & Eysenck, 1971; Smith & Ellingson, 2002; Smith et al., 2001), while other primary studies have indicated that faking behavior modifies the factor structure. Some of these studies found that faking led to a decrease in the number of factors (see Ellingson et al., 1999; Pauls & Crost, 2005; Van Iddekinge et al., 2005), whereas other studies found that faking produced additional factors (Cellar et al., 1996; Schmit & Ryan, 1993). The psychometric theory of faking effects (Salgado, 2016) proposes that faking can produce both effects depending on whether it affects a single scale (univariate range restriction) or several scales (multivariate range restriction). In the first case, faking will lead to smaller loadings in the factor structure. In the second case, faking will produce additional factors. In both cases, the effects of faking produce a lack of equivalence between the two conditions (honest and faking) in SS personality inventories.

In summary, the empirical evidence has clearly shown that faking is a source of variance that affects the psychometric properties of SS personality measures. Consequently, this phenomenon has a negative impact on hiring decisions in selection processes because individuals who distort their answers more effectively (inflating their scores) obtain higher positions in the selection ranking, relegating those who have answered honestly to lower positions. However, this evidence refers only to SS measures and does not necessarily apply to quasi-ipsative FC personality inventories. Therefore, it seems appropriate to examine the effects of faking on the factor structure of quasi-ipsative FC personality inventories.

Forced-Choice Inventories

Forced-choice (FC) personality inventories are personality measures designed to reduce faking behavior and to serve as an alternative to traditional SS instruments (see, for instance, Baron, 1996; Bartram, 1996; Borislow, 1958). The first publications about FC inventories date back to the 1940s (see Gordon, 1951; Hicks, 1970; Zavala, 1965); nonetheless, interest in examining the resistance of these measures to the effects of faking has increased over the last decades. The main characteristic of FC inventories is that each item of the questionnaire presents several alternatives (usually pairs, triads, or tetrads) with a similar degree of social desirability, from which individuals must choose the alternative that describes them best and, in some cases, the alternative that describes them least. As all the answer options are socially attractive, it is more difficult for applicants to distort the results. Consequently, their responses reflect the real choices that an individual would make. Therefore, the use of FC personality inventories should reduce the effects of faking (see Brown & Maydeu-Olivares, 2013; Christiansen et al., 2005; Converse et al., 2006; Dilchert & Ones, 2012; Jackson et al., 2000).

Depending on how the answer choice is made, FC personality inventories can provide three types of scores (i.e., normative, ipsative, and quasi-ipsative or partially ipsative scores), each of them with specific psychometric properties (Clemans, 1966; Hicks, 1970; Meade, 2004). FC inventories that provide normative scores are characterized by presenting only answer alternatives of the same dimension in each item; that is, each item evaluates just one personality factor and the same alternatives are never used to represent different factors. In the case of ipsative scores, the individual must assess all the alternatives given for each item. Therefore, the score for each dimension depends on the individual's scores on the other rated dimensions and, consequently, the sum of the scores obtained by each individual is a constant. Finally, quasi-ipsative scores come from measures that do not meet all the criteria to be purely ipsative but present some characteristics associated with them (Clemans, 1966). Specifically, Hicks (1970) indicated that a quasi-ipsative score is obtained when some of the following criteria apply: (1) individuals only partially order the alternatives; (2) scales have different numbers of items; (3) not all of the items ranked by respondents are scored; (4) scales are scored differently for differing respondent characteristics; (5) items differ in how they are weighted; (6) some ipsative scales are deleted when data are analyzed; and (7) the inventory includes normative sections.
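To make the distinction between score types concrete, here is a minimal sketch with hypothetical data (not the scoring rule of any particular inventory) showing why fully ipsative scores sum to the same constant for every respondent, whereas normative scores do not:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_items, n_scales = 5, 10, 5

# Fully ipsative format (hypothetical): for each item, the respondent rank-orders
# one alternative per scale (ranks 0..4), so every person's scale scores sum to
# the same constant.
ranks = np.array([[rng.permutation(n_scales) for _ in range(n_items)]
                  for _ in range(n_people)])        # shape (people, items, scales)
ipsative_scores = ranks.sum(axis=1)                 # scale scores per person
print(ipsative_scores.sum(axis=1))                  # identical total for every person

# Normative (single-stimulus) format: Likert ratings are free to vary, so totals differ.
likert = rng.integers(1, 6, size=(n_people, n_items, n_scales))
normative_scores = likert.sum(axis=1)
print(normative_scores.sum(axis=1))                 # totals differ across people
```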

Additionally, two types of quasi-ipsative FC inventories can be distinguished, depending on whether they provide algebraically dependent or non-algebraically dependent scores. In the first type, there is a metric dependence between personality scales and, therefore, there is some degree of ipsativization of scores. In the second type, the score for each personality factor is algebraically independent of the score for the other factors.

Meta-analytic evidence has shown that the three types of FC personality inventories are valid predictors of occupational and academic criteria, obtaining similar or larger effect sizes than those produced with SS measures (Bartram, 2005, 2007; Salgado, 2017; Salgado et al., 2015; Salgado & Táuriz, 2014). In particular, quasi-ipsative FC inventories stand out above SS inventories and ipsative and normative FC inventories (Lee et al., 2018; Salgado, 2017; Salgado et al., 2015; Salgado & Táuriz, 2014), even in faking response conditions (Martínez, 2019).

Forced-Choice Inventories and Effects of Faking

The empirical evidence on the effectiveness of FC inventories in controlling the effects of faking suggests that FC personality formats are valid instruments for reducing response distortion. So far, the overall results of the meta-analyses by Adair (2014), Cao and Drasgow (2019), Nguyen and McDaniel (2000), and Martínez (2019) support the conclusion that FC inventories show resistance to the effects of faking in experimental and occupational settings. Although the three types of FC inventories show resistance to faking, the quasi-ipsative FC answer format has proved to be more robust against this phenomenon than ipsative and normative FC inventories (Martínez, 2019; Salgado & Lado, 2018).

The effects of faking on the construct validity of FC personality inventories have not been examined in previous research. Instead, extensive research has focused on examining the convergent validity (correlations between scales) of SS and FC personality inventories under honest conditions. Findings showed evidence of strong equivalence between both formats, pointing out that SS and FC instruments measure essentially the same personality constructs (Brown & Maydeu-Olivares, 2011, 2013; Cattell & Brennan, 1994; Joubert et al., 2015; Lee et al., 2019; Lee et al., 2018; Morillo et al., 2019; Zhang et al., 2020). Recently, Otero et al. (2020) simultaneously examined the factor structure of an SS personality questionnaire (IP/5F; Salgado, 1998) and a quasi-ipsative FC questionnaire (QI5F_tri; Salgado, 2014). The results confirmed a five-factor structure in both cases and demonstrated the validity of these inventories for assessing the Big Five factors.

Therefore, it becomes necessary to investigate the effects that faking can have on the factor structure of quasi-ipsative FC inventories, since previous research has revealed the suitability of this specific FC format, when compared to the other FC types, for controlling the effects of faking on scores even in real-life personnel selection settings (Cao & Drasgow, 2019; Martínez, 2019; Salgado & Lado, 2018).

Aims of the Study and Research Hypotheses

The current study aims to contribute to the knowledge of the effects of faking on the construct validity of quasi-ipsative FC inventories. Specifically, this study has two main objectives. The first is to analyze the effects of faking on the factor structure of an algebraically non-dependent quasi-ipsative FC inventory; in other words, to analyze the invariance or equivalence of the measure under honest and faking conditions (Millsap, 2011). The second is to examine whether the experimental design (within- and between-subject designs) acts as a moderator of the magnitude of the effects of faking on the factor structure of this format.

As noted above, faking can reduce the number of factors or produce additional ones in SS measures (Salgado, 2016), depending on whether faking is univariate or multivariate. Although this issue has not been examined in FC personality measures, quasi-ipsative FC inventories have been shown to be more resistant than SS measures to the effects of faking on scores and predictive validity (Martínez, 2019; Otero et al., 2020; Salgado et al., 2015; Salgado & Táuriz, 2014). Therefore, quasi-ipsative FC inventories are likely to also show resistance to the effects of faking on factor structure. Consequently, we propose the following hypothesis:

Hypothesis 1: Quasi-ipsative FC inventories show an equivalent factor structure in honest and faking response conditions.

Likewise, previous studies have indicated that the experimental design of the study (within-subject and between-subject designs) affects the magnitude of the effects of faking on the scores of SS questionnaires and FC formats, but these differences in effect sizes were smaller for quasi-ipsative FC inventories (Martínez, 2019; Viswesvaran & Ones, 1999). Based on the previous empirical evidence, we posit the following hypothesis:

Hypothesis 2: Quasi-ipsative FC inventories show an equivalent factor structure in the within-subject design and in the between-subject design under honest and faking response conditions.

Method

Sample and Experimental Design

Participants were 1,141 students from the University of Santiago de Compostela. The average age was 21.05 years (SD = 4.04) and 67.13% were women (766 participants). Small group sessions were organized to respond to the inventory. Participation was voluntary and all subjects provided informed consent to participate in the study.

Regarding the experimental design, 43% (n = 490) of the subjects participated in a within-subject design, in which all participants completed the questionnaire both honestly and under conditions that induced them to fake. The remaining 57% participated in a two-independent-groups (between-subject) design, in which 449 subjects completed the questionnaire in the honest condition and 202 under faking instructions.

Measures

QI5F_tri. Personality was evaluated with the quasi-ipsative FC questionnaire QI5F_tri, developed by Salgado (2014). This questionnaire consists of 140 triads that evaluate the Big Five personality factors. Each of the Big Five is evaluated using 28 items and each item contains three alternatives. Each alternative can be a short sentence or an adjective. All the alternatives are similar in social desirability and are presented in positive form, that is, there are no negative alternatives. No item presents two phrases or adjectives that evaluate the same personality factor, and no item is used to measure two or more personality factors. For example, an item may include a phrase that evaluates openness to experience, another that evaluates emotional stability, and another that evaluates agreeableness. The QI5F_tri implements Horn's (1971) strategy of quasi-ipsativation, so that the items used to evaluate a personality dimension are not used to evaluate other personality dimensions. Thus, the score for each of the Big Five is algebraically independent of the scores for the other personality factors even though the score format is quasi-ipsative. In each item, individuals must choose the alternative that best describes them and the alternative that least describes them. An example of an item from this questionnaire is the following: “I am a person (a) who is open-minded; (b) who is a perfectionist; (c) who does not usually lose their temper.”
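The text does not spell out the scoring key, so the sketch below is only an assumed illustration of how a keying scheme consistent with Horn's (1971) quasi-ipsativation strategy might score one triad; the point values and the item's target factor are hypothetical.

```python
# Hypothetical illustration of the quasi-ipsative keying described above (the
# actual QI5F_tri scoring key is documented only in its technical manual).
# Under Horn's (1971) strategy, each triad is keyed to a single target factor,
# so each Big Five score depends only on its own 28 items and is algebraically
# independent of the other factor scores. The 2/1/0 point values are assumptions
# made purely for illustration.

def score_item(target_factor: str, factor_of_plus: str, factor_of_minus: str) -> int:
    """Return the points earned on the item's target factor (assumed 2/1/0 rule)."""
    if factor_of_plus == target_factor:
        return 2          # target alternative marked "+" (most like me)
    if factor_of_minus == target_factor:
        return 0          # target alternative marked "-" (least like me)
    return 1              # target alternative left blank

# Example triad: (a) open-minded [OE], (b) perfectionist [C], (c) even-tempered [ES],
# hypothetically keyed to conscientiousness; the respondent marks (a) "+" and (c) "-".
print(score_item(target_factor="C", factor_of_plus="OE", factor_of_minus="ES"))  # 1
```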

The technical manual of the QI5F_tri reports that the internal consistency coefficients for emotional stability (ES), extraversion (EX), openness to experience (OE), agreeableness (A), and conscientiousness (C) were .71, .73, .80, .66, and .80, respectively. The technical manual also reports test-retest reliabilities over a four-week interval of .91, .90, .79, .65, and .72 for ES, EX, OE, A, and C, respectively. Otero et al. (2020) also reported convergent-discriminant validity evidence using an SS personality inventory.

Procedure

The quasi-ipsative FC inventory was used under two experimental response-conditions: honest and faking. In the honest condition, participants followed the instructions that are described below:

In the following questionnaire you will be presented with sets of phrases grouped into triads. Try to rank them by first identifying the one that best describes you, the one that second best describes you, and finally the one that describes you least. In each item, mark a plus sign (+) next to the phrase that best describes you and a minus sign (-) next to the phrase that least describes you. You should leave blank the one you considered second.

For the faking condition, test instructions were slightly modified in such a way that participants were encouraged to fake. The instructions were as follows:

Next, you will be presented with sets of phrases grouped into triads. Try to rank them by first identifying the one that best describes you, the one that second best describes you, and finally the one that describes you least. When answering, imagine that you are in the last step of a selection process for a very attractive job. Since it offers you a great opportunity to advance your professional career, you want to get that job. To do this, you must answer this test trying to give the best image of yourself to get that job. In each item, mark a plus sign (+) next to the alternative that best describes you and a minus sign (-) next to the alternative that least describes you. You should leave blank the one you considered second best.

In both conditions, the questionnaire was administered (1) in paper-and-pencil format and (2) in computer format (using the Inquisit program; Millisecond, 2016). The instructions were the same regardless of the administration format, and the two formats were not mixed in the same small-group session. In addition, participants only had access to the questionnaire during the time they attended the study and responded using only one of the administration formats.

Statistical Analyses

For the statistical analyses, the 140 items that make up the QI5F_tri questionnaire were grouped arbitrarily (not randomly in the strictest statistical sense) into 20 clusters or compounds. Each cluster included 7 items from the instrument that evaluated the same personality factor. Thus, the 28 items that evaluated each personality factor were divided into 4 compounds of 7 items each. This item-grouping procedure was applied to the data obtained in the two experimental conditions (honest and faking) of the different samples analyzed (total, within-, and between-subject samples). Table 1 shows the compounds created.

Table 1.  Compounds of the Quasi-ipsative FC Inventory 
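A minimal sketch of this parceling step, with simulated item-level scores (the assumed 0-2 item scores and the simulated assignment of items to compounds are illustrative only; in the study the assignment was arbitrary rather than random):

```python
import numpy as np

# Sketch of the item-parceling step: 28 items per factor are grouped into
# 4 compounds of 7 items each, yielding 20 compound scores per respondent.
rng = np.random.default_rng(1)
n_respondents = 1141
factors = ["ES", "EX", "OE", "A", "C"]

# Assumed item-level scores in {0, 1, 2} for each factor's 28 items.
item_scores = {f: rng.integers(0, 3, size=(n_respondents, 28)) for f in factors}

compounds = {}
for f in factors:
    for k in range(4):                              # 4 compounds per factor
        block = item_scores[f][:, k * 7:(k + 1) * 7]
        compounds[f"{f}{k + 1}"] = block.sum(axis=1)

compound_matrix = np.column_stack(list(compounds.values()))
print(compound_matrix.shape)                        # (1141, 20)
```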

Factor Analyses

To examine whether there is measurement invariance in the structure of the quasi-ipsative FC inventory, a series of non-restrictive factor analyses (i.e., exploratory factor analyses, EFA) was carried out using the responses of participants under honest and faking conditions. The term non-restrictive factor analysis follows the recommendation of Ferrando and Anguiano-Carrasco (2010), who argue that it describes more appropriately the model being tested in this type of factor analysis. As they recommend, the non-restrictive factor analyses were carried out with a confirmatory purpose, since two hypotheses are being tested.

The non-restrictive factor analyses were implemented using maximum likelihood as the model fitting procedure. To carry out these analyses, the FACTOR program by Lorenzo-Seva and Ferrando (2018) was used, because it provides absolute model fit indices (such as the root mean square of residuals, RMSR) and relative model fit indices (for instance, the root mean square error of approximation, RMSEA; the non-normed fit index, NNFI; or the comparative fit index, CFI), and allows testing the equivalence of the model. The Pearson correlation matrix was used, five factors were retained, and Varimax rotation was applied to obtain orthogonal factors, in line with the theoretical framework of the Five-Factor model.
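As an illustration only (the authors ran these analyses in the FACTOR program), a rough Python analogue of the extraction and rotation step could look as follows; it assumes the `compound_matrix` built in the parceling sketch above and relies on the third-party factor_analyzer package:

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # third-party package, assumed installed

# Maximum-likelihood extraction of five factors with Varimax (orthogonal) rotation,
# mirroring the analysis described in the text; `compound_matrix` is the n x 20
# matrix of compound scores from the previous sketch.
fa = FactorAnalyzer(n_factors=5, rotation="varimax", method="ml")
fa.fit(compound_matrix)
loadings = fa.loadings_                     # 20 x 5 rotated loading matrix
print(np.round(loadings, 2))

# A simple absolute fit check: root mean square of the off-diagonal residuals
# between the observed and model-reproduced correlation matrices (RMSR).
corr = np.corrcoef(compound_matrix, rowvar=False)
reproduced = loadings @ loadings.T
off_diag = ~np.eye(corr.shape[0], dtype=bool)
rmsr = np.sqrt(np.mean((corr - reproduced)[off_diag] ** 2))
print(round(rmsr, 4))
```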

Factor analyses were also replicated using the principal component analysis (PCA) method, due to the fact that some authors, such as Costa and McCrae (1992) and Goldberg (1992), argue that PCA may be more appropriate than non-restrictive factor analyses for establishing the structure of personality inventories. The Factor program by Lorenzo-Seva and Ferrando (2018) was also used to perform these analyses, five factors were retained and Varimax rotation was applied.

Moreover, with the aim of verifying the fit of the data to the factor structure proposed by the Five-Factor model, restrictive factor analyses (i.e., confirmatory factor analyses, CFA) were conducted. In this case, the LISREL 8 program (Jöreskog & Sörbom, 1998) was used, applying the maximum likelihood method. Since the results were virtually the same in all cases, only the non-restrictive factor analysis solutions are reported.

Structure Congruence Analyses

In addition, following the recommendations of Ferrando and Lorenzo-Seva (2014), factor congruence coefficients were calculated in order to assess the extent to which the rotated solution was congruent with the proposed target matrix. That is, an analysis of factor congruence was conducted to establish the degree of similarity between factors that measure the same constructs (convergent factors) and the degree of dissimilarity between factors that represent different constructs (divergent factors) in the honest and faking response conditions.

Burt-Tucker's coefficients of congruence (rc; Tucker, 1951) were used to calculate factor congruence, following the recommendation of Cattell (1971). This coefficient indicates the similarity of the loading patterns: coefficients around .90 between convergent factors indicate that the paired factors are similar, particularly when coefficients between divergent factors are low (less than .40). Factor loadings from the rotated matrices were used for its calculation.
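For reference, the Burt-Tucker congruence coefficient is simply the ratio of the sum of cross-products of two loading vectors to the product of their norms; a minimal computation with hypothetical loading values:

```python
import numpy as np

def congruence(x, y):
    """Burt-Tucker congruence coefficient: sum of cross-products of two loading
    vectors divided by the product of their norms (Tucker, 1951)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2))

# Hypothetical loadings of the same four compounds on a matched factor under
# honest and faking conditions; values near 1.0 indicate convergent factors.
honest = [0.71, 0.65, 0.58, 0.62]
faking = [0.66, 0.60, 0.55, 0.64]
print(round(congruence(honest, faking), 3))
```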

Results

Non-restrictive Factor Analyses of the QI5F_tri Structure in the Total Sample

Table 2 shows the matrix of rotated loadings in the honest and faking conditions for the whole sample. With respect to the honest condition, all clusters had their highest positive loading on the factor on which they would be expected to load. Thus, the four compounds of emotional stability loaded together on factor 3, the four corresponding to extraversion loaded together on factor 1, the four compounds of openness to experience loaded together on factor 4, the four of agreeableness loaded together on factor 2, and the four compounds of conscientiousness loaded together on factor 5. Furthermore, none of the factors had other relevant positive loadings, and the negative loadings were all of small magnitude. Concerning the results of the non-restrictive factor analyses carried out with the responses under instructions that induced faking, the structure of five orthogonal factors was also supported by the data, although there were slight differences compared to the honest condition. Specifically, 18 of the 20 factored clusters had their highest positive loading on the factor on which they would be expected to load. The four compounds of emotional stability loaded together on factor 4, the four corresponding to extraversion loaded together on factor 3, the four openness to experience compounds loaded together on factor 1, and the four corresponding to conscientiousness loaded together on factor 2. However, only two of the agreeableness compounds (.605 for C15 and .272 for C16) had a relevant loading on the corresponding factor, and the other two had a negative loading on conscientiousness. Finally, none of the factors had relevant additional positive loadings, and the negative loadings were all of small magnitude, except for the two above-mentioned agreeableness compounds.

Table 2.  Rotated Factor Loadings of Non-restrictive Factor Analyses for the Honest and Faking Conditions Using the Total Sample 

Table 3 shows the goodness-of-fit statistics of the model of five orthogonal factors for the honest and faking conditions. Absolute fit indices (for example, RMSR), relative fit indices (e.g., RMSEA, CFI, NNFI), and equivalence statistics fell between typical good and excellent fit cut-off points for both experimental response conditions. All estimators showed an excellent model fit in all cases, being even slightly better for the faking than for the honest response condition. For example, RMSEA values were .031 and .026, NNFI values were .979 and .977, and CFI values were .989 and .988, for the honest and faking conditions, respectively. Also, the t statistic used to test the significance of the RMSEA and CFI values indicated that they were statistically significant in both cases. Moreover, the RMSR absolute fit indicator was .0177, 54.1% of the expected value of .0327, for the honest condition, and .0209, 55% of the expected value of .0380, for the faking instructions.

Table 3.  Goodness-of-fit Indicators of the Big Five Model of Honest and Faking Conditions Using the Total Sample 

In summary, the results of the examination of the QI5F_tri factor structure under honest and faking response conditions for the total sample indicated that the data fit a five-factor structure and that the model fit indices are good or excellent in both response conditions. Consequently, these results provide support for the invariance of the factor structure of this inventory under honest and faking conditions.

Non-restrictive Factor Analyses of the QI5F_tri Structure in the Within-Subject Design

The two factor analyses presented in Table 4 were carried out using the within-subject design in order to examine whether the type of experimental design produces differences in the results of the factor analyses. These analyses used a subsample of the total sample.

Table 4.  Rotated Factor Loadings of Non-restrictive Factor Analyses for the Honest and Faking Conditions Using the Within-subject Design (n = 490) 

The results of the factor loading matrix in the honest condition replicated the factor structure shown for the total sample with notable similarity and, therefore, the comments made for the total sample apply entirely to the present case. That is, all compounds had their highest positive loading on the expected factor. Thus, the five-factor structure was supported by the data. In the case of the factor loading matrix obtained under faking conditions for this design, the results were very similar to those obtained for the same condition with the total sample. The five-factor structure was clearly supported by the data, although the magnitude of the loadings was slightly lower in the faking condition than in the honest condition. Moreover, again, two of the agreeableness compounds, C13 and C14, showed reduced loadings on the factor on which they would be expected to load (factor 3), with values of .034 and .267, respectively, and negative loadings on the conscientiousness factor (factor 2), with values of -.671 and -.483 for these clusters.

Table 5.  Goodness-of-fit Indicators of the Big Five Model of Honest and Faking Conditions Using the Within-Subject Design 

Model fit indices obtained in the within-subject design are summarized in Table 5. In the honest condition, the model fit values were similar to or slightly better than those found in the total sample, although the differences have no practical relevance since they occurred at the third decimal place. In the faking response condition, the indices were also good or excellent, although slightly lower than those of the total sample. However, it must be taken into account that the sample size was much smaller in the present case, which results in greater sampling error.

Non-restrictive Factor Analyses of the QI5F_tri Structure in the Between-Subject Design

Finally, the last two non-restrictive factor analyses, presented in Table 6, were carried out with the between-subject design, which allows us to verify the potential effect of the research design on the fit of the model by comparing the results of these analyses with those of the within-subject design.

Table 6.  Rotated Factor Loadings of Non-restrictive Factor Analyses for the Honest and Faking Conditions Using the Between-subject Design 

Again, in the honest condition, the results showed that all the clusters loaded on their hypothesized factor, with relevant and positive loadings, and that they did not present positive secondary loadings on the remaining factors. It can also be seen that the remaining loadings for each factor were not significant and most of them were negative. The results under faking instructions, once again, showed that the data fit a structure of five orthogonal factors. However, three of the compounds (C1, C13, and C14) showed a nonsignificant loading on the factor on which they would be expected to load, although they did not load significantly on any other factor.

Table 7.  Goodness-of-fit Indicators of the Big Five Model of Honest and Faking Conditions Using the Between-Subject Design 

Model fit indices for the between-subject design are presented in Table 7. The values obtained were good or excellent, with magnitudes similar to those of the previous analyses. In particular, the RMSEA index should be highlighted, as its value showed a better fit under faking conditions than under honest conditions (.023 vs. .035). Therefore, in general, this latter analysis also provides empirical support for the five-factor structure and for the equivalence (invariance) of this structure under honest and faking conditions.

Structure Congruence Analyses of the QI5F_tri

Table 8 presents Burt-Tucker's congruence coefficients calculated for each pair of factors in each of the two experimental conditions analyzed (honest and faking). The top of the table shows Burt-Tucker's congruence coefficients for the faking condition, and the coefficients for the honest condition are presented at the bottom of the table. As can be seen, the results showed very low coefficients between divergent factors in both the honest and faking conditions. In all cases, the values of the Burt-Tucker coefficients between divergent factors were equal to or less than .40. Therefore, based on these results, the factor structure remains stable in the FC inventory despite the effects of faking.

Table 8.  Burt-Tucker’s Congruence Coefficients among Personality Factors for each Response-Condition 

Note. n honest sample = 939; n faking sample = 692. The values under honest conditions appear below the diagonal and the values under faking conditions appear above the diagonal; ES = emotional stability; EX = extraversion; OE = openness to experience; A = agreeableness; C = conscientiousness.

Finally, the stability of factor congruence between convergent factors of the different experimental conditions (honest vs. faking) was also analyzed. Table 9 reports the results obtained. Burt-Tucker's coefficients show much higher values between convergent factors of different experimental conditions than between divergent factors. On average, the congruence coefficient for convergent factors was .90, while the average for divergent factors was .14 in absolute value (-.10 when the sign is retained). In summary, these results demonstrate that the factor structure of the quasi-ipsative FC inventory is stable and robust against the effects of faking.

Table 9.  Burt-Tucker’s Congruence Coefficients among Personality Factors between the Honest and the Faking Conditions 

Note. n honest sample = 939; n faking sample = 692; ES = emotional stability; EX = extraversion; OE = openness to experience; A = agreeableness; C = conscientiousness.

Discussion

Measurement invariance or equivalence establishes that measures in two or more groups or under two or more conditions are comparable (Millsap, 2011; Jiang et al., 2017). In the current study, the analysis of measurement equivalence was conducted in order to determine whether the Five-Factor measure obtained with a quasi-ipsative FC inventory changes depending on whether the personality inventory is completed under honest or faking instructions. A lack of measurement equivalence would indicate that the response conditions can alter the construct measurement and, consequently, that the instrument does not measure the same construct under honest and faking conditions. In this sense, it is pertinent in the field of Work and Organizational (W/O) Psychology to analyze this issue, particularly for personality measures used in personnel selection processes, since applicants might voluntarily distort their answers to show a more positive self-image.

This research contributes to the personality and faking literature in several ways. First, this is the first study to examine the measurement invariance of an FC inventory that produces algebraically non-dependent quasi-ipsative scores under faking conditions. In addition, this study examined measurement invariance in three subsamples and with two experimental designs (within-subject and between-subject designs), which allowed us to determine the potential moderating effect of the study design on the results. This is the second contribution of this study.

The third contribution of this research is that the examination was not limited to a single factor analysis; rather, three types of latent structure analyses were carried out: non-restrictive factor analyses (maximum likelihood method), principal component analyses, and restrictive factor analyses. Moreover, goodness-of-fit indices and Burt-Tucker's congruence coefficients were calculated to obtain a more robust index of the similarities between the factor structures of both conditions. Therefore, it is the most exhaustive study on this issue carried out to date with quasi-ipsative FC inventories.

The first hypothesis proposed that faking behavior would not affect the factor structure of the quasi-ipsative FC inventory (Hypothesis 1). Likewise, since previous research has shown that the magnitude of faking effects can be affected by the type of experimental design (Martínez, 2019; Viswesvaran & Ones, 1999), a more specific analysis of the effects of faking on factor structure, comparing a within-subject design and a between-subject design, was proposed (Hypothesis 2).

The findings support the robustness and stability of the five-factor structure of the quasi-ipsative FC personality questionnaire under faking conditions. The results of the non-restrictive factor analyses showed in all cases a factor structure of five components. The fit indices obtained in these analyses showed a good fit of the model to the data. Likewise, the results of the confirmatory analyses showed acceptable fit indices, both in the honest condition and in the faking condition of the two samples. These findings allow us to affirm that the data fit the proposed Five-Factor model and, therefore, that the factor structure remains stable even under faking conditions. This is also a unique contribution of the current study.

Furthermore, Burt-Tucker's congruence coefficients were reported in order to provide a more robust indicator of the similarities between factor structures in both experimental conditions. The results leave no room for doubt, as the values obtained showed the robustness of the factor structure of quasi-ipsative FC inventories in both honest and faking conditions, even in the within-subject experimental design. When a quasi-ipsative FC inventory is used, the factor structure shows convergence between the honest and faking response conditions. This is the fifth contribution of this study.

Finally, it should also be considered that the analyses were carried out under conditions that favor discrepancy between factor structures, since the item clusters were created through an arbitrary assignment. The use of proper compounds, such as the Big Five facets measured by the QI5F_tri, would undoubtedly have contributed to greater equivalence (invariance) of the measure and better fit statistics, because measurement errors and the variance of specific factors would have been reduced. However, in this research the Big Five facets were not used, in order to allow a potentially higher disagreement between the honest and faking conditions. This is therefore another important contribution of this study, because it shows the robust structure of this inventory in one of the worst-case scenarios.

Theoretical and Practical Implications

The findings of the current study have relevant implications for research and professional practice. From a theoretical point of view, this is the first study that provides empirical evidence of the effects of faking on the construct validity of a quasi-ipsative FC inventory that provides algebraically non-dependent scores. The results suggest that this type of FC questionnaire is a robust instrument that controls faking effects on factor structure. Hence, these findings provide evidence of measurement invariance under faking conditions for a quasi-ipsative FC questionnaire without algebraic dependence.

This study has also examined the moderating effect of the study design on the magnitude of the effects of faking on construct validity. The results show that the consequences of faking on factor structure are the same regardless of the experimental design. Therefore, these findings suggest that, in the case of the effects of faking on the construct validity of a quasi-ipsative FC inventory, the type of design does not act as a moderator.

The findings reported in this study also have implications from a practical perspective. The use of quasi-ipsative FC personality measures (without algebraic dependence) is recommended in applied contexts, since the robustness and stability of the five-factor structure of this quasi-ipsative FC inventory under faking response instructions was supported. Therefore, using a quasi-ipsative FC questionnaire without algebraic dependence instead of an SS personality measure for high-stakes decisions would allow for greater control of faking.

Limitations of the Study and Future Research

It should be noted that this study also has some limitations. First, it was carried out in an experimental laboratory context. In this sense, it would be useful to replicate this research with other types of samples and in other contexts (e.g., candidates vs. employees) in order to explore whether the results vary and to obtain greater support and evidence for the present results.

Second, this study has been performed with a quasi-ipsative FC personality measure that provides scores without algebraic dependence. Hence, the results cannot be generalized to other types of quasi-ipsative FC inventories; that is, the results obtained could be different for quasi-ipsative formats that provide algebraically dependent scores. Although it could be speculated, based on these findings, that the quasi-ipsative FC format as a whole is resistant to the modifications of structure that faking produces in SS formats, it would be worthwhile to examine whether these results replicate with quasi-ipsative FC measures with algebraic dependence, in order to provide evidence of the effects of faking on other quasi-ipsative FC inventories. Likewise, it would be useful to analyze this issue in the other FC inventory types, normative and ipsative, so as to examine whether the results vary depending on the FC psychometric characteristics.

Conclusions

The current study has contributed to the knowledge of the effects of faking on FC personality measures by providing evidence of robustness against the effects of faking on the construct validity of a quasi-ipsative FC inventory without algebraic dependence. Specifically, this study shows (1) that the algebraically non-dependent quasi-ipsative FC inventory presents measurement invariance between honest and faking response conditions, (2) that the factor congruence between convergent factors in both response conditions is very high, with high model fit indices in all cases, and (3) that the type of experimental design is not a moderator of the effects of faking on construct validity. These findings have practical relevance for the assessment of personality, especially in the area of personnel selection, where hiring decisions are more frequently affected by faking, and they support the use of quasi-ipsative FC questionnaires without algebraic dependence in order to control the effects of faking.

Finally, we encourage future researchers to replicate this study and expand these contributions using samples from different countries and other FC personality measures.

Cite this article as: Martínez, A., Moscoso, S., & Lado, M. (2021). Faking effects on the factor structure of a quasi-ipsative forced-choice personality inventory. Journal of Work and Organizational Psychology, 37(1), 1-10. https://doi.org/10.5093/jwop2021a7

References

Adair, C. (2014). Interventions for addressing faking on personality Assessments for employee selection: A meta-analysis (unpublished doctoral dissertation). DePaul University. [ Links ]

Alonso, P., Moscoso, S., & Cuadrado, D. (2015). Procedimientos de selección de personal en pequeñas y medianas empresas españolas. Journal of Work and Organizational Psychology, 31(2), 79-89. https://doi.org/10.1016/j.rpto.2015.04.002 [ Links ]

Baron, H. (1996). Strengths and limitation of ipsative measurement. Journal of Occupational and Organizational Psychology, 69(1), 49-56. https://doi.org/10.1111/j.2044-8325.1996.tb00599.x [ Links ]

Barrick, M. R., Mount, M. K., & Judge, T. A. (2001). Personality and performance at the beginning of the new millennium: What do we know and where do we go next? International Journal of Selection and Assessment, 9(1-2), 9-30. https://doi.org/10.1111/1468-2389.00160 [ Links ]

Bartram, D. (1996). The relationship between ipsatized and normative measures of personality. Journal of Occupational and Organizational Psychology, 69(1), 25-39. https://doi.org/10.1111/j.2044-8325.1996.tb00597.x [ Links ]

Bartram, D. (2005). The Great Eight competencies: A criterion-centric approach to validation. Journal of Applied Psychology, 90(6), 1185-1203. https://doi.org/10.1037/0021-9010.90.6.1185 [ Links ]

Bartram, D. (2007). Increasing validity with forced-choice criterion measurement formats. International Journal of Selection and Assessment, 15(3), 263-272. https://doi.org/10.1111/j.1468-2389.2007.00386.x [ Links ]

Birkeland, S. A., Manson, T. M., Kisamore, J. L., Brannick, M. T., & Smith, M. A. (2006). A meta-analytic investigation of job applicant faking on personality measures. International Journal of Selection and Assessment, 14(4), 317-335. https://doi.org/10.1111/j.1468-2389.2006.00354.x [ Links ]

Borislow, B. (1958). The Edwards Personal Preference Schedule (EPPS) and fakability. Journal of Applied Psychology, 42(1), 22-27. [ Links ]

Brown, A., & Maydeu-Olivares, A. (2011). Item response modeling of forced-choice questionnaires. Educational and Psychological Measurement, 71(3), 460–502. [ Links ]

Brown, A., & Maydeu-Olivares, A. (2013). How IRT can solve problems of ipsative data in forced-choice questionnaires. Psychological Methods, 18(1), 36-52. https://doi.org/10.1037/a0030641 [ Links ]

Cao, M., & Drasgow, F. (2019). Does forcing reduce faking? A meta-analytic review of forced-choice personality measures in high-stakes situations. Journal of Applied Psychology, 104(11), 1347-1368. https://doi.org/10.1037/apl0000414 [ Links ]

Cattell, R. B., & Brennan, J. (1994). Finding personality structure when ipsative measurements are the unavoidable basis of the variables. American Journal of Psychology, 107(2), 261-274. https://doi.org/10.2307/1423040 [ Links ]

Cellar, D. F., Miller, M. L., Doverspike, D. D., & Klawsky, J. D. (1996). Comparison of factor structures and criterion-related validity coefficients for two measures of personality based on the five factor model. Journal of Applied Psychology, 81(6), 694-704. https://doi.org/10.1037/0021-9010.81.6.694 [ Links ]

Christiansen, N. D., Burns, G. N., & Montgomery, G. E. (2005). Reconsidering forced-choice item formats for applicant personality assessment. Human Performance, 18(3), 267-307. https://doi.org/10.1207/s15327043hup1803_4 [ Links ]

Clemans, W. V. (1966). An analytical and empirical examination of some properties of ipsative measures. Psychometric Monographs, 14, 1-56. [ Links ]

Converse, P. D., Oswald, F. L., Imus, A., Hedricks, C., Roy, R., & Butera, H. (2006). Forcing choices in personality measurement. In R. L. Griffith & M. H. Peterson (Eds.), A closer examination of applicant faking behavior (pp. 263-282). Information Age. [ Links ]

Costa, P. T., Jr., & McCrae, R. R. (1992). Four ways five factors are basic. Personality and Individual Differences, 13(6), 653-665. https://doi.org/10.1016/0191-8869(92)90236-I [ Links ]

Cuadrado D., Salgado J. F., & Moscoso S. (2020). Individual differences and counterproductive academic behaviors in high school. Plos One, 15(9). https://doi.org/10.1371/journal.pone.0238892 [ Links ]

Cuadrado, D., Salgado, J. F., & Moscoso, S. (2021). Personality, intelligence, and counterproductive academic behaviors: A meta-analysis. Journal of Personality and Social Psychology, 120(2), 504–537. https://doi.org/10.1037/pspp0000285 [ Links ]

Delgado-Rodríguez, N., Hernández-Fernaud, E., Rosales, C., Díaz-Vilela, L., Isla-Díaz, R., & Díaz-Cabrera, D. (2018). Contextual performance in academic settings: The role of personality, self-efficacy, and impression management. Journal of Work and Organizational Psychology, 34(2), 63-68. https://doi.org/10.5093/jwop2018a8 [ Links ]

Dilchert, S., & Ones, D. S. (2012). Measuring and improving environmental sustainability. In S. E. Jackson, D. S. Ones, & S. Dilchert (Eds.), Managing HR for environmental sustainability, (pp. 187-221). Jossey-Bass/Wiley. [ Links ]

Ellingson, J. E., Sackett, P. R., & Hough, L. M. (1999). Social desirability corrections in personality measurement: Issues of applicant comparison and construct validity. Journal of Applied Psychology, 84(2), 155-166. https://doi.org/10.1037/0021-9010.84.2.155 [ Links ]

Ellingson, J. E., Smith, D. B., & Sackett, P. R. (2001). Investigating the influence of social desirability on personality factor structure. Journal of Applied Psychology, 86(1), 122-133. [ Links ]

Ferrando, P. J., & Anguiano-Carrasco, C. (2010). El análisis factorial como técnica de investigación en psicología. Papeles del Psicólogo, 31(1), 18-33. [ Links ]

Ferrando, P. J., & Lorenzo-Seva, U. (2014). Exploratory item factor analysis: Additional considerations. Anales de Psicología, 30(3), 1170-1175. https://doi.org/10.6018/analesps.30.3.199991 [ Links ]

García-Izquierdo, A. L., Aguado, D., & Ponsoda-Gil, V. (2019). New insights on technology and assessment: Introduction to JWOP special issue. Journal of Work and Organizational Psychology, 35(2), 49-52. https://doi.org/10.5093/jwop2019a6 [ Links ]

García-Izquierdo, A. L., Ramos-Villagrasa, P. J., & Lubiano, M. A. (2020). Developing biodata for public manager selection purposes: A comparison between fuzzy logic and traditional methods. Journal of Work and Organizational Psychology, 36(3), 231-242. https://doi.org/10.5093/jwop2020a22 [ Links ]

Goldberg, L. R. (1992). The development of markers for the Big-Five factor structure. Psychological assessment, 4(1), 26-42. https://doi.org/10.1037/1040-3590.4.1.26 [ Links ]

Golubovich, J., Lake, C. J., Anguiano-Carrasco, C., & Seybert, J. (2020). Measuring achievement striving via a situational judgment test: The value of additional context. Journal of Work and Organizational Psychology, 36(2), 157-167. https://doi.org/10.5093/jwop2020a15 [ Links ]

Gordon, L. V. (1951). Validities of the forced-choice and inventory methods of personality measurement. Journal of Applied Psychology, 35(6), 407-412. https://doi.org/10.1037/h0058853 [ Links ]

Griffith, R. L., & McDaniel, M. (2006). The nature of deception and applicant faking behavior. In R. L. Griffith & M. H. Peterson (Eds.), A closer examination of applicant faking behavior (pp. 113-148). Information Age. [ Links ]

Heller, M. (2005). Court ruling that employer’s integrity test violated ADA could open door to litigation. Workforce Management, 84(9), 74-77. [ Links ]

Hicks, L. E. (1970). Some properties of ipsative, normative, and forced-choice normative measures. Psychological Bulletin, 74(3), 167-184. https://doi.org/doi.org/10.1037/h0029780 [ Links ]

Hooper, A. C. (2007). Self-presentation on personality measures in lab and field settings: A meta-analysis (unpublished doctoral dissertation). University of Minnesota. [ Links ]

Horn, J. L. (1971). Motivation and dynamic calculus concepts from multivariate experiment. In R. B. Cattell (Ed.), Handbook of Multivariate Experimental Psychology (2nd printing, pp. 611-641). Tand McNally. [ Links ]

Jackson, D. N., Wroblewski, V. R., & Ashton, M. C. (2000). The impact of faking on employment tests: Does forced choice offer a solution? Human Performance, 13(4), 371-388. https://doi.org/10.1207/S15327043HUP1304_3 [ Links ]

Jiang, G., Mai, Y., & Yuan, K. H. (2017). Advances in measurement invariance and mean comparison of latent variables: Equivalence testing and a projection-based approach. Frontiers in Psychology, 8, 1823. https://doi.org/10.3389/fpsyg.2017.01823 [ Links ]

Jöreskog, K., & Sörbom, D. (1998). LISREL (version 8.20) [computer software]. Scientific Software Inc. [ Links ]

Joubert, T., Inceoglu, I., Bartram, D., Dowdeswell, K., & Lin, Y. (2015). A comparison of the psychometric properties of the forced choice and Likert scale versions of a personality instrument. International Journal of Selection and Assessment, 23(1), 92-97. https://doi.org/10.1111/ijsa.12098 [ Links ]

Judge, T. A., Rodell, J. B., Klinger, R. L., Simon, L. S., & Crawford, E. R. (2013). Hierarchical representations of the five-factor model of personality in predicting job performance: Integrating three organizing frameworks with two theoretical perspectives. Journal of Applied Psychology, 98(6), 875-925. https://doi.org/10.1037/a0033901 [ Links ]

Lado, M., & Alonso, P. (2017). The Five-Factor model and job performance in low complexity jobs: A quantitative synthesis. Journal of Work and Organizational Psychology, 33(3), 175-182. https://doi.org/10.1016/j.rpto.2017.07.004 [ Links ]

Lee, P., Joo, S.-H., & Lee, S. (2019). Examining stability of personality profile solutions between Likert-type and multidimensional forced choice measure. Personality and Individual Differences, 142, 13-20. [ Links ]

Lee, P., Lee, S., & Stark, S. (2018). Examining validity evidence for multidimensional forced choice measures with different scoring approaches. Personality and Individual Differences, 123, 229-235. https://doi.org/10.1016/j.paid.2017.11.031 [ Links ]

Lorenzo-Seva, U., & Ferrando, P. J. (2018). FACTOR (version 10.8.02) [computer software]. Universidad Rovira i Virgili. [ Links ]

Marshall, M. B., De Fruyt, F., Rolland, J. P., & Bagby, R. M. (2005). Socially desirable responding and the factorial stability of the NEO PI-R. Psychological Assessment, 17(3), 379-384. https://doi.org/10.1037/1040-3590.17.3.379 [ Links ]

Martínez, A. (2019). Evaluación empírica de un modelo teórico de los efectos del faking sobre las medidas de personalidad ocupacional [Empirical assessment of a theoretical model of the effects of faking on the scores of occupational personality] (unpublished doctoral dissertation). University of Santiago de Compostela. [ Links ]

McFarland, L. A., & Ryan, A. M. (2000). Variance in faking across noncognitive measures. Journal of Applied Psychology, 85(5), 812-821. https://doi.org/10.1037/0021-9010.85.5.812 [ Links ]

Meade, A. W. (2004). Psychometric problems and issues involved with creating and using ipsative measures for selection. Journal of Occupational and Organizational Psychology, 77(4), 531-552. [ Links ]

Michaelis, W., & Eysenck, H. J. (1971). The determination of personality inventory factor patterns and intercorrelations by changes in real-life motivation. The Journal of Genetic Psychology, 118(2), 223-234. https://doi.org/10.1080/00221325.1971.10532611 [ Links ]

Millisecond. (2016). Inquisit (version 5.0.6.0) [computer software]. Millisecond Software. https://www.millisecond.com/products/inquisit6/weboverview.aspx [ Links ]

Millsap, R. E. (2011). Statistical approaches to measurement invariance. Routledge. [ Links ]

Morillo, D., Abad, F. J., Kreitchmann, R. S., Leenen, I., Hontangas, P., & Ponsoda, V. (2019). The journey from Likert to forced-choice questionnaires: Evidence of the invariance of item parameters. Journal of Work and Organizational Psychology, 35(2), 75-83. https://doi.org/10.5093/jwop2019a11 [ Links ]

Nguyen, N. T., & McDaniel, M. A. (2000, December). Brain size and intelligence: A meta-analysis [Oral communication]. First Annual Conference of the International Society of Intelligence Research, Cleveland, OH, United States. [ Links ]

Otero, I., Cuadrado, D., & Martínez, A. (2020). Convergent and predictive validity of the Big Five factors assessed with single stimulus and quasi-ipsative questionnaires. Journal of Work and Organizational Psychology, 36(3), 215-222. https://doi.org/10.5093/jwop2020a17 [ Links ]

Paulhus, D. L. (2002). Socially desirable responding: The evolution of a construct. In H. Braun, D. N. Jackson, & D. E. Wiley (Eds.), The role of constructs in psychological and educational measurement (pp. 67-88). Lawrence Erlbaum Associates, Inc. [ Links ]

Pauls, C. A., & Crost, N. W. (2005). Effects of different instructional sets on the construct validity of the NEO-PI-R. Personality and Individual Differences, 39(2), 297-308. https://doi.org/10.1016/j.paid.2005.01.003 [ Links ]

Poropat, A. E. (2009). A meta-analysis of the five-factor model of personality and academic performance. Psychological Bulletin, 135(2), 322-338. [ Links ]

Rosse, J. G., Stecher, M. D., Miller, J. L., & Levin, R. A. (1998). The impact of response distortion on preemployment personality testing and hiring decisions. Journal of Applied Psychology, 83(4), 634-644. https://doi.org/10.1037/0021-9010.83.4.634 [ Links ]

Rothstein, M. G., & Goffin, R. D. (2006). The use of personality measures in personnel selection: What does current research support? Human Resource Management Review, 16(2), 155-180. https://doi.org/10.1016/j.hrmr.2006.03.004 [ Links ]

Sackett, P. R., Lievens, F., Van Iddekinge, C. H., & Kuncel, N. R. (2017). Individual differences and their measurement: A review of 100 years of research. Journal of Applied Psychology, 102(3), 254-273. https://doi.org/10.1037/apl0000151 [ Links ]

Salgado, J. F. (1997). The Five Factor model of personality and job performance in the European Community. Journal of Applied Psychology, 82(1), 30-43. https://doi.org/10.1037/0021-9010.82.1.30 [ Links ]

Salgado, J. F. (1998). Manual técnico del inventario de personalidad de cinco factores (IP/5F) [Manual of the Five Factors of Personality Inventory IP/5F]. Tórculo. [ Links ]

Salgado, J. F. (2003). Predicting job performance using FFM and non-FFM personality measures. Journal of Occupational and Organizational Psychology, 76(3), 323-346. https://doi.org/10.1348/096317903769647201 [ Links ]

Salgado, J. F. (2014). Reliability, construct, and criterion validity of the Quasi-Ipsative Personality Inventory (QI5F/Tri) (unpublished manuscript). University of Santiago de Compostela. [ Links ]

Salgado, J. F. (2016). A theoretical model of psychometric effects of faking on assessment procedures: Empirical findings and implications for personality at work. International Journal of Selection and Assessment, 24(3), 209-228. https://doi.org/10.1111/ijsa.12142 [ Links ]

Salgado, J. F. (2017). Moderator effects of job complexity on the validity of forced-choice personality inventories for predicting job performance. Journal of Work and Organizational Psychology, 33(3), 229-238. https://doi.org/10.1016/j.rpto.2017.07.001 [ Links ]

Salgado, J. F., Anderson, N., & Moscoso, S. (2020). Personality at work. In P. J. Corr & G. Matthews (Eds.), The Cambridge handbook of personality psychology (pp. 427-438). Cambridge University Press. [ Links ]

Salgado, J. F., Anderson, N., & Tauriz, G. (2015). The validity of ipsative and quasi-ipsative forced-choice personality inventories for different occupational groups: A comprehensive meta-analysis. Journal of Occupational and Organizational Psychology, 88(4), 797-834. https://doi.org/10.1111/joop.12098 [ Links ]

Salgado, J. F., & Lado, M. (2018). Faking resistance of a quasi-ipsative forced-choice personality inventory without algebraic dependence. Journal of Work and Organizational Psychology, 34(3), 213-216. https://doi.org/10.5093/jwop2018a23 [ Links ]

Salgado, J. F., Moscoso, S., & Anderson, N. (2013). Personality and counterproductive work behavior. In N. D. Christiansen & R. P. Tett (Eds.), Handbook of personality at work (pp. 606-632). Routledge. [ Links ]

Salgado, J. F., & Tauriz, G. (2014). The Five-Factor model, forced-choice personality inventories and performance: A comprehensive meta-analysis of academic and occupational validity studies. European Journal of Work and Organizational Psychology, 23(1), 3-30. https://doi.org/10.1080/1359432X.2012.716198 [ Links ]

Schmit, M. J., & Ryan, A. M. (1993). The Big Five in personnel selection: Factor structure in applicant and nonapplicant populations. Journal of Applied Psychology, 78(6), 966-974. https://doi.org/10.1037/0021-9010.78.6.966 [ Links ]

Smith, D. B., & Ellingson, J. E. (2002). Substance versus style: A new look at social desirability in motivating contexts. Journal of Applied Psychology, 87(2), 211-219. https://doi.org/10.1037/0021-9010.87.2.211 [ Links ]

Smith, D. B., Hanges, P. J., & Dickson, M. W. (2001). Personnel selection and the five-factor model: Reexamining the effects of applicant’s frame of reference. Journal of Applied Psychology, 86(2), 304-315. https://doi.org/10.1037/0021-9010.86.2.304 [ Links ]

Stark, S., Chernyshenko, O. S., Chan, K. Y., Lee, W. C., & Drasgow, F. (2001). Effects of the testing situation on item responding: Cause for concern. Journal of Applied Psychology, 86(5), 943-953. https://doi.org/10.1037/0021-9010.86.5.943 [ Links ]

Tucker, L. R. (1951). A method for synthesis of factor analysis studies (Personnel Research Section Report No. 984). Department of the Army. [ Links ]

Van Iddekinge, C. H., Raymark, P. H., & Roth, P. L. (2005). Assessing personality with a structured employment interview: Construct-related validity and susceptibility to response inflation. Journal of Applied Psychology, 90(3), 536-552. https://doi.org/10.1037/0021-9010.90.3.536 [ Links ]

Viswesvaran, C., & Ones, D. S. (1999). Meta-analyses of fakability estimates: Implications for personality measurement. Educational and Psychological Measurement, 59(2), 197-210. https://doi.org/10.1177/00131649921969802 [ Links ]

Zavala, A. (1965). Development of the forced-choice rating scale technique. Psychological Bulletin, 63(2), 117-124. https://doi.org/10.1037/h0021567 [ Links ]

Zhang, B., Sun, T., Drasgow, F., Chernyshenko, O. S., Nye, C. D., Stark, S., & White, L. A. (2019). Though forced, still valid: Psychometric equivalence of forced-choice and single-statement measures. Organizational Research Methods, 23(3), 569-590. https://doi.org/10.1177/1094428119836486 [ Links ]

Ziegler, M., MacCann, C., & Roberts, R. D. (2012). Faking: Knowns, unknowns, and points of contention. In M. Ziegler, C. MacCann, & R. D. Roberts (Eds.), New perspectives on faking in personality assessment (pp. 3-16). Oxford University Press. [ Links ]

Received: December 21, 2020; Accepted: February 01, 2021

Correspondence: alexandra.martinez@usc.es (A. Martínez).

Conflict of Interest

The authors of this article declare no conflict of interest.

This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial No Derivative License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited and the work is not changed in any way.