SciELO - Scientific Electronic Library Online

 

Revista de Psicología del Trabajo y de las Organizaciones

Online version ISSN 2174-0534 · Print version ISSN 1576-5962

Abstract

SALGADO, Jesús F.; MOSCOSO, Silvia and ANDERSON, Neil. Corrections for criterion reliability in validity generalization: the consistency of Hermes, the utility of Midas. Rev. psicol. trab. organ. [online]. 2016, vol. 32, n. 1, pp. 17-23. ISSN 2174-0534. https://dx.doi.org/10.1016/j.rpto.2015.12.001.

There is criticism in the literature about the use of interrater coefficients to correct for criterion reliability in validity generalization (VG) studies, disputing whether .52 is an accurate and non-dubious estimate of the interrater reliability of overall job performance (OJP) ratings. We present a second-order meta-analysis of three independent meta-analytic studies of the interrater reliability of job performance ratings and offer a number of comments and reflections on LeBreton et al.'s paper. The results of our meta-analysis indicate that the interrater reliability for a single rater is .52 (k = 66, N = 18,582, SD = .105). Our main conclusions are: (a) the value of .52 is an accurate estimate of the interrater reliability of overall job performance for a single rater; (b) it is not reasonable to conclude that past VG studies that used .52 as the criterion reliability value rest on a less than secure statistical foundation; (c) based on interrater reliability, test-retest reliability, and coefficient alpha, supervisor ratings are a useful and appropriate measure of job performance and can be confidently used as a criterion; (d) validity correction for criterion unreliability has been unanimously recommended by "classical" psychometricians and I/O psychologists as the proper way to estimate predictor validity, and is still recommended at present; (e) the substantive contribution of VG procedures to informing HRM practices in organizations should not be lost in these technical points of debate.
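The validity correction for criterion unreliability discussed in point (d) is the classical correction for attenuation, r_c = r_xy / sqrt(r_yy). A minimal sketch follows; the observed validity of .30 is a hypothetical illustrative value, while .52 is the single-rater interrater reliability estimate reported in the abstract:

```python
import math

def correct_for_criterion_unreliability(r_observed: float,
                                        criterion_reliability: float) -> float:
    """Disattenuate an observed validity coefficient for criterion
    unreliability: r_c = r_xy / sqrt(r_yy)."""
    return r_observed / math.sqrt(criterion_reliability)

# Hypothetical observed predictor-criterion correlation of .30,
# corrected using the .52 interrater reliability estimate.
corrected = correct_for_criterion_unreliability(0.30, 0.52)
print(round(corrected, 3))  # 0.30 / sqrt(0.52) ≈ 0.416
```

Only the criterion side is corrected here, since the abstract concerns criterion (job performance rating) reliability; correcting the predictor side as well would require the predictor's reliability estimate.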

Keywords: Interrater; Reliability; Validity generalization; Job performance; Ratings.


 

Creative Commons License: All the contents of this journal, except where otherwise noted, are licensed under a Creative Commons Attribution License.