Anales de Psicología
On-line version ISSN 1695-2294 · Print version ISSN 0212-9728
Anal. Psicol. vol.32 no.3 Murcia oct. 2016
https://dx.doi.org/10.6018/analesps.32.3.259391
Multidimensional Assessment of Giftedness: Criterion Validity of Battery of Intelligence and Creativity Measures in Predicting Arts and Academic Talents
Evaluación Multidimensional de la Superdotación: Criterios de validez de la Batería de Inteligencia y Creatividad para predecir los talentos artísticos y académicos
Tatiana de Cassia Nakano1, Ricardo Primi2, Walquiria de Jesus Ribeiro3 and Leandro S. Almeida4
1 Post Graduate Program in Psychology. Pontifical Catholic University of Campinas (Campinas, São Paulo, Brazil).
2 Post Graduate Program in Psychology. Universidade São Francisco (Itatiba, São Paulo, Brazil).
3 Master in Psychology, Pontifical Catholic University of Campinas (Campinas, São Paulo, Brazil); Federal University of Maranhão (Brazil).
4 Institute of Education, University of Minho, Braga (Portugal).
This article is part of a research project financed by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Coordenação de Aperfeiçoamento de Pessoal do Ensino Superior (CAPES).
ABSTRACT
We test the utility of the Battery for Giftedness Assessment (BaAH/S) in identifying differences between two groups of already identified gifted students in the areas of academic and artistic talent. Four latent factors were assessed: (a) fluid intelligence, (b) metaphor production (verbal creativity), (c) figural fluency (figural creativity), and (d) divergent thinking figural task quality (figural creativity). The sample comprised 987 children and adolescents, 464 boys and 523 girls, aged 8 to 17, divided into two groups: regular students (N = 866) and gifted students (N = 67 academic abilities, N = 34 artistic abilities, and N = 20 with no domain identified). The academic giftedness group showed higher reasoning, produced more remote/original metaphors, showed higher figural fluency, and had drawings rated as more original. Children in the artistic giftedness group showed higher reasoning, higher figural fluency, and drawings rated as more original. Reasoning abilities are relatively more important in the academic than in the artistic giftedness group (r = .39 vs. r = .14). Within the artistic group, figural fluency and ratings of originality are relatively more important than reasoning (r = .25 and r = .21 vs. r = .14). We emphasize the importance of assessing creativity in different domains, in addition to intelligence, to improve the understanding of giftedness and talent.
Keywords: Giftedness, Talent, Creativity, Intelligence, Multidimensional assessment, Metaphor production, Fluid reasoning.
RESUMEN
Este trabajo tiene por objeto probar la utilidad de la Batería para la Evaluación de la Superdotación (BaAH/S, por sus siglas en portugués) para identificar diferentes grupos de alumnos superdotados en las áreas de talento académico y artístico. La batería valora cuatro factores latentes: (a) inteligencia fluida, (b) producción de metáforas (creatividad verbal), (c) fluidez figurativa (creatividad figurativa), y (d) calidad del pensamiento divergente figurativo (creatividad figurativa). Se tomó una muestra de 987 niños y adolescentes, 464 chicos y 523 chicas, de edades de 8 a 17 años, que pertenecían a dos grupos: alumnos no superdotados (N=866) y alumnos superdotados (N= 67 habilidades académicas, N=34 habilidades artísticas y N=20 no identificados en un dominio específico). El grupo de superdotados académicos presentó las puntuaciones más altas en razonamiento, produjo metáforas más originales y remotas, fue figurativamente más fluido y sus dibujos fueron más originales. El grupo de superdotados artísticos presentó mayor razonamiento, alta fluidez figurativa y dibujos valorados como más originales. Las habilidades de razonamiento eran relativamente mayores en los superdotados académicos que en los artísticos (r = .39 vs r = .14). En el grupo de superdotados artísticos la fluidez figurativa y sus puntuaciones en originalidad eran relativamente más importantes que el razonamiento (r = .25 y r = .21 vs r = .14). El trabajo enfatiza la importancia de evaluar la creatividad en distintos dominios además de la inteligencia para mejorar el entendimiento de la superdotación y el talento.
Palabras clave: Superdotación, Talento, Creatividad, Inteligencia, Evaluación multidimensional, Producción de metáforas, Razonamiento fluido.
Introduction
Recent theories of giftedness adopt a multidimensional approach to defining it, including intelligence and creativity as well as other abilities such as leadership characteristics, psychomotor ability, visual, performing and musical arts, and academic and non-academic achievement areas (Heller, 2013). Although intelligence is the most widely used criterion for gifted identification, this does not mean that only intelligence merits attention from researchers and educators (Besjes-de Bock & Ruyter, 2011; Pfeiffer, 2015; Prieto, Lopez-Martinez, & Ferrandiz, 2003). When intelligence tests are used exclusively in screening, students gifted in other areas of exceptionality will be missed. Moreover, there is a risk of creating a homogeneous group with similar cognitive abilities (Pierson, Kilmer, Rothlisberg, & McIntosh, 2012). As a consequence of identification based on intelligence tests, authors affirm that minority and impoverished students have been underrepresented in gifted and talented programs (Pfeiffer, 2015; Van Tassel-Baska, Feng, & Evans, 2007).
Recent orientations recommend the use of comprehensive assessment instruments in order to capture the broad spectrum of high ability (Calero & García-Martín, 2014; Hernández-Torrano, Ferrándiz, Ferrando, Prieto, & Fernández, 2014). Diagnostic evaluation also usually requires protocols that go beyond the classic IQ-test approach and include other components or characteristics associated with high capacities (Callahan, 2006; Montero-Linares, Navarro-Guzmán, & Aguilar-Villagrán, 2013; Renzulli & Gaesser, 2015; Subotnik, Olszewski-Kubilius, & Worrell). Given the multidimensionality and complexity of giftedness and talent, researchers recommend a wide identification process, based on all available information sources, using multiple criteria such as standardized tests and informal instruments (teacher and parent checklists, questionnaires, school products, and portfolios). A comprehensive process is considered the best practice for identifying gifted children (Baer & Kaufman, 2005; Renzulli & Gaesser, 2015; VanTassel-Baska, Feng, & Evans, 2007). The recognition of multiple perspectives and the use of many sources of information can enlarge the giftedness assessment, reduce the number of false positives and negatives in the identification process, and allow the identification of different types of talents. For instance, Sternberg (2010) illustrates the difference: on the one hand, someone who is analytically gifted (but not gifted in other areas) may do well on standardized tests and activities that require analytical reasoning; on the other hand, someone creatively gifted may come up with many novel, different, and original ideas but will not necessarily perform well on standardized tests, since these tap analytical skills more than creativity.
This justifies the importance of a comprehensive evaluation that considers the heterogeneity in the ways gifted talents manifest, as well as the cultural and linguistic specificities of the population (Almeida, Fleith, & Oliveira, 2013).
Nevertheless, the literature shows that most giftedness identification systems are based only on intelligence measures, usually identifying a student who receives a test score two standard deviations above the mean, although cut-off scores are controversial since they are usually determined in an arbitrary manner according to local needs, as pointed out by Lichtenberger, Volker, Kaufman, and Kaufman (2006). Empirical studies show that gifted samples generally score 1 to 1⅓ standard deviations above the control-group mean, therefore lower than expected according to the statistical criterion used in giftedness identification. Thus, an important limitation of research in this area is the lack of consensus on giftedness conceptualization and identification (Dai, Swanson, & Cheng, 2011; Lichtenberger, Volker, Kaufman, & Kaufman, 2006; Roid, 2003; Volker & Phelps, 2004).
In order to contribute to a more comprehensive giftedness identification, we started to develop the Battery for Giftedness Assessment (Batería de Altas Habilidades e Sobredotação - BaAH-S). BaAH-S has two parts: one with measures of intelligence and creativity, and a second with a teacher rating scale for screening a broad set of domains related to giftedness. BaAH-S assesses reasoning and creative potential via performance tests, while other cognitive and socio-emotional skills such as academic achievement, leadership, and motivation are assessed via teacher report. We divided the battery in two parts to avoid presenting children with tasks based on complicated abstract concepts. In BaAH-S, problem solving is assumed to be isomorphic with fluid reasoning (Gf), defined as "the deliberate but flexible control of attention to solve novel 'on the spot' problems that cannot be performed by relying exclusively on previously learned habits, schemas, and scripts" (Schneider & McGrew, 2012). According to Pierson, Kilmer, Rothlisberg, and McIntosh (2012), most test batteries include measures of fluid (Gf) or crystallized intelligence (Gc), or a combination of both.
BaAH-S also assesses two different domains of giftedness: academic and productive-creative or artistic (Renzulli, 2004). Academic-related abilities are associated with high levels of school performance, logical and analytical thinking, good memory, intense intellectual activity, and the ability to process complex information. These are the standard potential attributes assessed by intelligence tests. Productive-creative or artistic-related abilities are associated with curiosity, problem solving, creative thinking (such as fluency, flexibility, and originality), and the production of ideas, innovations, and artistic products. This second area is usually not well represented in the standard intelligence tests used for gifted identification (Virgolim, 1997).
Creativity-related abilities have been emphasized as an important element in the most recent theories of giftedness: the Differentiated Model of Giftedness and Talent by Gagné (2005), the Three-Ring Conception by Renzulli (2005), and Wisdom, Intelligence and Creativity by Sternberg (2003). These authors believe that the inclusion of measures of creative potential in gifted programs will benefit students and add valuable information about individual potential not currently assessed by intelligence tests (Kaufman, Plucker, & Russell, 2012). Dai, Swanson, and Cheng (2012) completed a survey of empirical studies on giftedness published during 1998-2010 and found that creativity was one of the four most researched topics, along with underachievement, social-emotional skills, and alternative ways of identification. The authors point to the fact that intelligence assessment has been extensively tested and validated for the purpose of giftedness identification, while creativity assessment has more limited evidence of reliability and validity.
Any attempt to include creativity assessment in gifted identification faces the complex and controversial lack of consensus about the construct's definition and assessment (Beghetto, Plucker, & MaKinster, 2001; Cropley, 2000; Lemons, 2011). Many scholars have questioned whether divergent thinking tests show predictive validity (Baer, 1994; Feist, 2004; Gardner, 1993; Han, 2003; Jarosewich et al., 2002; Kogan & Pankove, 1974; Schraw, 2005). But much progress has occurred recently, such as new assessment methods (Silvia, Winterstein, Willse, et al., 2008), new methods of data analysis (Nakano & Primi, 2014; Primi, 2014; Silvia, 2007, 2011), predictive validity of creativity assessment tools (Kim, 2006; Plucker & Runco, 1998; Runco, Millar, Acar, & Cramond, 2011; Zeng, Proctor, & Salvendy, 2011), the structure and independence of the evaluated traits (Chase, 1985; Clapham, 1998; Heausler & Thompson, 1988; Primi, Nakano, Morais, Almeida, & David, 2013; Runco & Mraz, 1992), and the validity of subjective ratings (Benedek, Mühlmann, Jauk, & Neubauer, 2013; Chen, Kasof, Himsel, Greenberger, Dong, & Xue, 2002; Kaufman, Baer, Agars, & Loomis, 2010; Kaufman, Lee, Baer, & Lee, 2007; Silvia, 2011; Silvia, Martin, & Nusbaum, 2009; Silvia, Winterstein, Willse, et al., 2008). In addition, aiming at the identification of gifted children (from 9 to 12 years old), an international research project is validating the Aurora Battery (Chart, Grigorenko, & Sternberg, 2008) in different countries, a battery specifically addressing three kinds of intelligence: analytical, synthetic (creative), and practical.
Some critical reviews and reanalyses help us understand the limited validity evidence of divergent thinking tests for assessing creativity. Plucker and Runco (1998) point to limiting features of studies, such as inadequate statistical procedures in the presence of non-normal distributions, the short duration of studies, and the inadequacy of outcome criteria in longitudinal studies - usually centered more on quantity than quality. Plucker (1999) reanalyzed the famous longitudinal studies of Torrance (1969, 1972, 1987, 2002) and Torrance and Wu (1981), in which the Torrance Tests of Creative Thinking (TTCT) were administered and 212 elementary school students were followed for a long period of time, with creative achievements recorded as outcome measures (Cramond, Matthews-Morgan, Bandalos, & Zuo, 2005; Runco, Millar, Acar, & Cramond, 2010). Using structural equation modeling, Plucker showed strong evidence of predictive validity for verbal divergent thinking scores, which had a stronger effect than intelligence in predicting creative achievement (these results were found for verbal but not for figural tests). The author points out that the outcome measures were biased toward the verbal domain, which could explain the lack of validity of the figural tests. Recent studies using subjective scoring of the quality of ideas produced in divergent thinking tasks - one of the old methods used by Guilford (Wilson, Guilford, & Christensen, 1953) - as opposed to quantity alone as measured by fluency scores, show that creativity is more strongly related to intelligence than previously thought (Primi, 2015; Silvia, 2011, 2015).
These studies exemplify how advances in assessment methods and statistical analysis have demonstrated the robustness of the psychometric properties of creativity measures. Nevertheless, there are few studies operationalizing measures that incorporate these advances into multidimensional batteries to assess and identify gifted students. Most studies still focus on intelligence only; those that combine intelligence and creativity remain scarce. Miller and Cohen (2012) suggest "that conceptions of giftedness and creativity encompass an extremely important aspect of human development: supporting and caring for others" (p. 111). The Battery for Giftedness Assessment (BaAH-S) incorporates intelligence (fluid reasoning) and creativity measures in two domains - verbal and figural - aiming to sample responses from different domains of giftedness expression. It also incorporates recent methods of subjective rating in addition to the traditional measures obtained in divergent thinking tasks. The main goal of this paper is to test the utility of the BaAH-S in identifying children's potential abilities. We therefore test the criterion validity of BaAH-S in pinpointing differences between two groups of already identified gifted students in the areas of academic and artistic talent. At the same time, we discuss the utility of a multidimensional assessment in the process of identifying domain-specific talents. We hope that BaAH-S can help fill the gap in the identification of the broad set of skills that is usually missed by narrower batteries that do not consider the multidimensional nature of giftedness, and that it can serve as an auxiliary tool for providing high-quality services to this population.
Method
Participants
The sample was composed of 987 children and adolescents, 464 boys and 523 girls, aged 8 to 17 (the majority of the sample, 96%, were between 8 and 15 years old), M = 11.58, SD = 1.89. They were studying in grades 2 to 12, most in grades 4 through 9 (96.6%). There were two main groups: regular students (N = 866), our control group, and students identified as gifted, our criterion group (N = 120). In the criterion group, 67 students were identified in the domain of academic abilities (M = 12.68, SD = 2.67), 34 in the domain of artistic abilities (M = 12.21, SD = 2.58), and 20 students had no specific domain identified at the time of data collection but had passed the tests of the identification phase (M = 12, SD = 1.49).
Measures
The Battery for Giftedness Assessment (BaAH/S) is composed of four intelligence subtests (verbal, abstract, numerical, and logical reasoning), two creativity subtests (a divergent thinking figural task and a metaphor creation test), and a teacher rating scale. Only the objective subtests were used in this study. The present study adds information about the criterion validity of the battery, complementing previous studies of its internal factor structure (Ribeiro, Nakano, & Primi, 2014), item analysis using item response theory (Nakano et al., 2015; Nakano & Primi, 2014), and the association between intelligence and creativity using confirmatory factor analysis (Nakano, Wechsler, Campos, & Millian, 2015).
The four reasoning subtests are a subset of items from the Battery of Reasoning Tests (BPR-5; Primi & Almeida, 2000). Verbal Reasoning (RV): 12 verbal analogy items, each with two pairs of words, one of them incomplete; the subject has to choose, among five alternatives, the word that completes the second pair. Abstract Reasoning (RA): 12 geometric analogy items, each containing two pairs of figures, one of them incomplete; using analogy, the individual has to select, among five options, the figure that correctly completes the second pair. Numerical Reasoning (RN): 12 numerical series in which the last two numbers are missing; considering the arithmetic relations between the numbers, the next two numbers have to be discovered. Logical Reasoning (RL): 12 items presenting practical, everyday situations as the context for logical premises; students need to use deductive reasoning to relate the premises and draw the conclusions asked for. The number of correct conclusions differs across problems, varying between 1 and 4.
The divergent thinking figural task (DTF) is a subtest of the Test of Creativity in Children's Drawings (Nakano, Wechsler, & Primi, 2011). It consists of 10 incomplete stimuli that have to be completed by creating drawings. Eleven creative characteristics are analyzed, grouped into three factors (Ribeiro, Nakano, & Primi, 2012): Elaboration (FG_ELB), which scores five attributes of the drawings - fantasy, uncommon perspective, internal perspective, use of context, and elaboration; Emotion (FG_EMO), which scores three attributes - expression of emotion, movement, and expressiveness of titles; and Cognitive (FG_COG), which scores three traditional factors - fluency, flexibility, and originality. The ten drawings were also scored by raters on a 5-point scale (from not original/creative to very creative/original). These ratings compose a score of divergent thinking figural task quality (DTFq).
The Metaphor Creation Test (MCT; Primi, 2014) is a new method for the assessment of creativity using the divergent production of metaphors in tasks such as "The camel is ----- of the desert" or "The grass is the ------ of the land". The test is composed of five items asking for a maximum of four responses each. Subjects were instructed to fill in the blank space with a creative metaphor and explain each of their responses. Raters scored each response for quality and flexibility. Quality (qual_tri) was rated on a four-point scale (0 = not a metaphor to 3 = highly original and remote association). Flexibility (flx) was scored for each item on a four-point scale ranging from 1 to 4, depending on the number of shifts in response categories. Raters were trained in two sessions to master the scoring criteria. Subjects' quality scores were estimated with the Many-Facet Rasch Measurement (MFRM) model, which produces latent scores of metaphor quality. Flexibility scores were calculated as the average number of shifts in categories.
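The flexibility rule described above (a 1-4 per-item score driven by the number of shifts in response categories, then averaged across items) can be sketched as follows. The exact mapping from shift counts to the 1-4 scale, and the function names, are our illustrative assumptions, not the battery's actual scoring code.

```python
# Illustrative sketch of the MCT flexibility scoring described in the text.
# The mapping from shift counts to the 1-4 scale is an assumption here.

def flexibility_score(categories):
    """Score one item: count shifts between consecutive response
    categories, then map the count onto a 1-4 scale."""
    shifts = sum(1 for a, b in zip(categories, categories[1:]) if a != b)
    return min(shifts + 1, 4)  # 0 shifts -> 1, capped at 4

def average_flexibility(items):
    """A subject's flexibility score: the average over all items."""
    return sum(flexibility_score(item) for item in items) / len(items)

# Four responses to one item spanning three categories -> two shifts -> 3
print(flexibility_score(["animal", "animal", "vehicle", "guide"]))  # 3
```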
Procedure
We recruited a general group of regular students (the control group) from public elementary schools conveniently located in four Brazilian cities. The criterion group comprises students from a city program for Students with High Abilities/Giftedness run by the Secretary of Education of the Brasilia Federal District. They had been previously identified by the program's selection procedures. The program divides students into seven subgroups, each attending thematic classes in arts (two classes), academics (three classes), or mixed areas (two classes).
The identification process starts with a referral from teachers, professionals from the school community, family members, or fellow students, or with a self-nomination. Once they join the program, students undergo an observation phase - lasting from four to sixteen weeks - in which program professionals observe and assess their capabilities with intelligence tests (WISC-III, Raven's Progressive Matrices, Battery of Reasoning Tests, BPR-5), interest survey questionnaires, creativity exercises, and general records containing information and student productions.
In the next stage, those students who fit the profile defined by the program are transferred to the intervention phase, which offers enrichment activities and support to the students and their families (types I, II, and III as proposed by Renzulli, 2004). Assessment results identify students as gifted in the academic or artistic area and refer them to stimulation and enrichment groups in accordance with their skills and interests. Program activities usually end in the last year of high school.
One of the researchers administered the tests during the academic and arts enrichment classes. Tests were administered collectively on a single occasion in the classroom, lasting an average of 90 minutes. Students responded first to the four reasoning subtests, followed by the figural creativity activity, and finally by the divergent production of metaphors. The objectives of this study and the assessment instruments were presented to parents and students in order to obtain their informed consent to participate in this research.
Data Analyses
Data analysis focused on examining the relationship between criterion variables and the latent factors derived from BaAH-S. There were two criterion variables (observed variables): CR_ACD and CR_ART, dummy variables (1 if in the criterion group and 0 otherwise) representing whether the student was identified as having academic or artistic talents, respectively. Validity studies test whether "a theoretical attribute has a causal effect on test scores ... but since many attributes cannot be manipulated ... validation of tests for these attributes is therefore restricted to correlational studies ... that compare test scores of groups of persons that are assumed to differ in the attributes" (Borsboom & Mellenbergh, 2007, p. 101). Our criterion subsamples were identified, through multiple methods and sources of information, as possessing a high level of potential ability in the academic and artistic domains. We therefore hypothesize that if the constructs measured by BaAH-S are valid - that is, if they capture the potential attributes of intelligence and creativity that characterize this subsample - they should be associated with these criterion variables. Additionally, since BaAH-S is a multidimensional battery measuring creativity (verbal and figural) and intelligence, we intend to explore whether its factors relate differently to the two criteria. This will help clarify whether different factors are more strongly associated with different domains of talent.
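As a simplified illustration of this criterion logic: correlating a 0/1 group indicator with an observed score gives a point-biserial correlation (a Pearson r with a dichotomous variable). The actual analysis reported in the paper relates the dummies to latent factors within a MIMIC model rather than to raw scores, and the data below are invented.

```python
# Point-biserial correlation: Pearson r between a 0/1 group dummy and a
# score. A toy stand-in for the criterion associations described above.
from statistics import mean, pstdev

def point_biserial(dummy, scores):
    mx, my = mean(dummy), mean(scores)
    cov = mean((d - mx) * (s - my) for d, s in zip(dummy, scores))
    return cov / (pstdev(dummy) * pstdev(scores))

# Invented example: the last two subjects are in the criterion group
cr_acd = [0, 0, 0, 0, 1, 1]
reasoning = [1.0, 2.0, 1.0, 2.0, 3.0, 4.0]
print(round(point_biserial(cr_acd, reasoning), 2))
```

A positive r here means the criterion group scores above the control group, which is the pattern reported for all four BaAH-S factors.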
We approached this analysis with a multiple-indicators, multiple-causes (MIMIC) model (MacIntosh & Hashim, 2003). The measurement model (Figure 1) is composed of four latent factors: (a) fluid intelligence (Battery of Reasoning Tests, BRT), reflected in four indicators (logical reasoning RL, numerical reasoning RN, abstract reasoning RA, and verbal reasoning RV); (b) the metaphor creation test (MCT), reflected in two indicators (quality of metaphors and flexibility); (c) the divergent thinking figural task (DTF), reflected in three indicators (elaboration, emotional features, and cognitive variables); and (d) divergent thinking figural task quality (DTFq), reflected in five observed variables (subjective ratings of the originality of the first five drawings on a scale from 1 = not original to 5 = very creative/original).
These four latent factors are regressed on the two observed criterion indicators as well as on a gender dummy variable (1 for girls, 0 for boys) included as a control. These variables are represented in the left part of Figure 1. The confirmatory factor analysis with covariates (MIMIC) was estimated in Mplus using the MLR algorithm (robust maximum likelihood), which produces parameter estimates with standard errors and a chi-square test statistic that are robust to non-normality of the observed variables. It can also model variables with non-standard distributions, such as counts and ordered categorical (Likert-type) items (Muthén & Muthén, 2010). Our main hypothesis is tested via the size and significance of the associations of the criterion variables with the latent factors of BaAH-S. In addition to Mplus, we used the R packages psych, semPlot, and lavaan, and the open-source software JASP for general statistical analysis and figures (Epskamp, 2015; Love et al., 2015; Revelle, 2015; Rosseel, 2012).
Results
Table 1 presents descriptive statistics for the variables and subtests of BaAH-S in the entire sample. In general, most variables show distributions close to normal (RV, RA, RN, RL, qual_tri, fg_cog). Two variables from the figural fluency task show moderate positive skewness and kurtosis, since they have a substantial number of zero counts. The scores produced by the subjective ratings of the drawings in the figural fluency task (E01, E02, E03, E04, and E05) are also slightly positively skewed. When running the SEM analysis we accommodated these departures from normality by modeling fg_elb and fg_emo as count variables and E01-E05 as ordered categorical.
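The distribution check described above rests on moment-based skewness and excess kurtosis, which can be sketched as follows; the count data are invented for illustration and are not the study's dataset.

```python
# Moment-based sample skewness and excess kurtosis, the diagnostics behind
# the normality check above (the data here are invented for illustration).
from statistics import mean

def skew_kurtosis(x):
    m, n = mean(x), len(x)
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    # excess kurtosis is 0 for a normal distribution
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3

# A zero-inflated count variable (like fg_elb) shows positive asymmetry:
counts = [0, 0, 0, 0, 0, 1, 1, 2, 3, 5, 8]
skew, kurt = skew_kurtosis(counts)
print(skew > 0)  # True
```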
Table 2 shows standardized parameter estimates and inferential statistics for the MIMIC model described in Figure 1. It presents each parameter estimate (Par est), its standard error (se), and their ratio (Par/se). A first run specified the indicator variables as continuous. The last two columns show the parameter estimates and standard errors for the same model but treating fg_elb and fg_emo as count variables and E01-E05 as ordered categorical variables. As can be seen, there are no marked differences between the parameters from the two runs.
The model converged to an identifiable solution. Fit indices were adequate: χ2 = 450.1, df = 101, χ2/df = 4.4, RMSEA = .059, CFI = .93, TLI = .91, SRMR = .04. The measurement parameters indicate that all indicators had high loadings on their corresponding latent factors. The bottom part of Table 1 shows the latent construct correlations: figural fluency and subjective ratings of quality are highly associated (r = .88), and divergent production of metaphors has a high relationship with the fluid reasoning factor (r = .60). All other latent factors had small to moderate associations (.20 to .30). Gender was entered as a control variable to disentangle associations of the criterion variables with the latent constructs that could be due to unbalanced gender distributions across groups. We observed gender differences only in the quality of drawings in the figural fluency task, with girls tending to do slightly better than boys. The criterion groups tended to have relatively fewer girls than boys, and this imbalance was stronger in the academic giftedness group.
Criterion validity information is presented in the middle part of Table 2. All latent variables show significant correlations with the criteria, ranging from small to moderate (r = .11 to r = .39). Children in the academic giftedness group show higher reasoning, produce more remote/original metaphors, show higher figural fluency, and have drawings rated as more original. Children in the artistic giftedness group show higher reasoning, as well as higher figural fluency and drawings rated as more original. Moreover, reasoning abilities are more strongly associated with the academic than with the artistic giftedness group (r = .39 vs. r = .14). Within the artistic group, figural fluency and ratings of originality are relatively more important than reasoning (r = .25 and r = .21 vs. r = .14).
Figure 2 shows this interaction of creativity/intelligence with area of giftedness. Before preparing this figure, all variables were standardized as z scores (M = 0, SD = 1). The figure shows average scores on the four composites - fluid intelligence (BRT), metaphor creation test (MCT), divergent thinking figural task (DTF), and divergent thinking figural task quality (DTFq) - for the three groups (gray scale): control, artistic, and academic. The academic group is clearly higher in fluid reasoning and metaphor production, while the artistic group has higher scores on figural fluency and drawing quality.
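The standardization and group averaging behind a figure of this kind can be sketched as follows; the scores and group labels below are invented for illustration, not the study's data.

```python
# z-standardize a composite over the whole sample, then average within
# groups, as done for Figure 2 (values and labels are invented).
from statistics import mean, pstdev

def z_scores(values):
    m, sd = mean(values), pstdev(values)
    return [(v - m) / sd for v in values]

def group_profile(values, groups):
    """Mean z score of one composite within each group."""
    z = z_scores(values)
    return {g: mean(zv for zv, gv in zip(z, groups) if gv == g)
            for g in set(groups)}

brt = [10, 12, 11, 9, 16, 18, 13, 14]
grp = ["control"] * 4 + ["academic"] * 2 + ["artistic"] * 2
profile = group_profile(brt, grp)
print(profile["academic"] > profile["control"])  # True
```

Because each composite is standardized over the whole sample, the group means are directly comparable across composites, which is what makes the profile plot in Figure 2 interpretable.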
Discussion
This paper reports criterion validity evidence for a battery assessing intelligence and creativity for giftedness assessment. The main goal was to test the utility of the BaAH/S in identifying students with high abilities. We found positive associations with the criteria for all measures of intelligence and of figural and verbal creativity. As expected, intelligence (fluid reasoning) predicts both types of giftedness with a significant association, but the results show that creativity is also associated with giftedness. This provides evidence that adding different abilities to the assessment process can improve the accuracy of giftedness identification (Calero & García-Martín, 2014; Gallagher, 2008; Hernández-Torrano, Ferrándiz, Ferrando, Prieto, & Fernández, 2014; Renzulli & Gaesser, 2015). The literature recognizes the multidimensional nature of giftedness (Feldman, 2000; Heller, 2013; Li et al., 2009; Kaufman & Sternberg, 2008; Jarosewich, Pfeiffer, & Morris, 2002; Robinson & Clinkenbeard, 2008), although it is not common to find evidence of the criterion validity of comprehensive measures in gifted samples (Bracken & Brown, 2006; Baer & Kaufman, 2005; Kaufman, Plucker, & Russell, 2012; Kerr & Sodano, 2003; Hazin et al., 2009; Sternberg, 2010). The validity results for BaAH/S thus support the claim that adding creativity in different domains can provide complementary information in identifying talents.
A second objective was to explore whether assessing different domains adds information in predicting areas of giftedness. As expected, academically gifted students presented higher scores on the intelligence measures, in all types of reasoning evaluated (verbal, abstract, logical, and numerical), as well as in metaphor production. The artistic group, in turn, had higher scores on the figural divergent thinking tasks. We could thus observe an interaction between creativity domains and type of giftedness. This result emphasizes the value of assessing multiple attributes to understand gifted individuals (Renzulli, 2004; Sternberg, 1981). It also shows the importance of divergent thinking measures in the figural domain. Plucker (1999) argues that the lack of validity of figural tests may be related to the nature of the criterion measures, which oversample characteristics of the verbal-academic domain. In this study there is a close correspondence between the outcome measure (arts) and the abilities assessed by the figural tests, which may have made it easier to find associations between the two.
An unexpected finding is that the artistic group did not show higher scores on the metaphor task, whereas the academic group performed better on it. The metaphor task is defined as "an instrument for evaluation of cognitive components of creativity" (Primi, Miguel, Couto, & Muniz, 2007, p. 198). Recent studies show a stronger role of intelligence in creative thinking than previously thought, especially implicating executive functions, working memory, and fluid intelligence in the production of creative metaphors (Benedek et al., 2013; Chiappe & Chiappe, 2007; Kazmerski, Blasko, & Dessalegn, 2003; Primi, 2014; Silvia & Beaty, 2012). For example, Silvia and Beaty (2012) showed that crystallized knowledge predicted only individuals' ability to generate conventional metaphors (r = .30), whereas fluid intelligence predicted creative metaphor production (r = .45) and was not associated with conventional metaphor production. We replicated this finding, observing a strong association between divergent production of metaphors and fluid reasoning (r = .60, similar to what was found in Primi, 2014). David, Morais, Primi, and Miguel (2014) found that scores on the Metaphor Creation Test were strongly associated with school grades in Portuguese high school students. Therefore, metaphor-intelligence associations may reflect a common mechanism underlying fluid reasoning and the production of creative and abstract metaphors, one that is stronger in academically gifted samples but not in artistic talent.
In conclusion, multidimensional assessment instruments such as the BaAH/S can be useful in detecting the distinct profiles that identify domain-specific talents. Their use implies changes in the identification process, which should consider a broader set of attributes, including potential related to creativity in the arts, in addition to the already well-established aspects related to academic abilities.
References
1. Baer, J. (1994). Divergent thinking is not a general trait: A multi-domain training experiment. Creativity Research Journal, 7, 35-46. http://dx.doi.org/10.1080/10400419409534507. [ Links ]
2. Baer, J., & Kaufman, J. C. (2005). Bridging generality and specificity: The Amusement Park Theoretical (APT) Model of Creativity. Roeper Review, 27, 158-163. http://dx.doi.org/10.1080/02783190509554310. [ Links ]
3. Barros, D.P., Primi, R., Miguel, F.K., Almeida, L., & Oliveira, E.P. (2010). Metaphor creation: a measure of creativity or intelligence? European Journal of Education and Psychology, 3(1), 103-115. [ Links ]
4. Beghetto, R.A., Plucker, J.A., & MaKinster, J.G. (2001). Who studies creativity and how do we know? Creativity Research Journal, 13(3/4), 351-357. http://dx.doi.org/10.1207/S15326934CRJ1334_12. [ Links ]
5. Benedek, M., Mühlmann, C., Jauk, E., & Neubauer, A. C. (2013). Assessment of divergent thinking by means of the subjective top-scoring method: Effects of the number of top-ideas and time-on-task on reliability and validity. Psychology of Aesthetics, Creativity, and the Arts, 7(4), 341. http://dx.doi.org/10.1037/a0033644. [ Links ]
6. Besjes-de Bock, K. M., & de Ruyter, D. J. (2011). Five values of giftedness. Roeper Review, 33, 198-207. doi: 10.1080/02783193.2011.580502. [ Links ]
7. Borsboom, D., & Mellenbergh, G. J. (2007). Test validity in cognitive assessment. In Leighton, J. P. & Gierl, M. J. (Eds.), Cognitive diagnostic assessment for education: Theory and applications (pp. 85-115). New York: Cambridge University Press. [ Links ]
8. Bracken, B. A., & Brown, E. F. (2006). Behavioral identification and assessment of gifted and talented students. Journal of Psychoeducational Assessment, 24(2), 112-122. doi: 10.1177/0734282905285246. [ Links ]
9. Calero, M. D., & García-Martín, M. B. (2014). Estabilidad temporal del C.I. y potencial de aprendizaje en niños superdotados: implicaciones diagnósticas (Temporal stability of IQ and learning potential of gifted children: diagnostic implications). Anales de Psicología, 30(2), 512-521. http://dx.doi.org/10.6018/analesps.30.2.16380. [ Links ]
10. Callahan, C. M. (2006). Developing a plan for evaluating a program in gifted education. In J. H. Purcell & R. D. Eckert (Eds.), Designing services and programs for high ability learners: A guidebook for gifted education (pp. 195-206). Thousand Oaks, CA: Corwin. http://dx.doi.org/10.4135/9781483329307.n15. [ Links ]
11. Chart, H., Grigorenko, E. L, & Sternberg, R. J. (2008). Identification: The Aurora Battery. In J. A. Plucker & C. M. Callahan (Eds), Critical issues and practices in gifted education (pp. 345-365). Waco, TX: Prufrock Press. [ Links ]
12. Chase, C. I. (1985). Review of the Torrance Tests of Creative Thinking. In J.V. Mitchell Jr. (Org.). The ninth mental measurements yearbook (pp.1631-1632). Lincoln: University of Nebraska Press. [ Links ]
13. Chen, C., Kasof, J., Himsel, A.J., Greenberger, E., Dong, Q., & Xue, G. (2002). Creativity in drawings of geometric shapes: a cross-cultural examination with the consensual assessment technique. Journal of Cross-Cultural Psychology, 33(2), 171-187. http://dx.doi.org/10.1177/0022022102033002004. [ Links ]
14. Chiappe, D. L., & Chiappe, P. (2007). The role of working memory in metaphor production and comprehension. Journal of Memory and Language, 56, 172-188. [ Links ]
15. Clapham, M. M. (1998). Structure of figural forms A and B of the Torrance Tests of Creative Thinking. Educational and Psychological Measurement, 58, 275-283. http://dx.doi.org/10.1177/0013164498058002010. [ Links ]
16. Cramond, B., Matthews-Morgan, J., Bandalos, D., & Zuo, L. (2005). A report on the 40-year follow-up of the Torrance Tests of Creative Thinking: Alive and well in the new millennium. Gifted Child Quarterly, 49, 283-291. http://dx.doi.org/10.1177/001698620504900402. [ Links ]
17. Cropley, A. (2000). Defining and measuring creativity: Are creativity tests worth using? Roeper Review, 23, 72-79. http://dx.doi.org/10.1080/02783190009554069. [ Links ]
18. Dai, D. Y., Swanson, J. A., & Cheng, H. (2011). State of research on giftedness and gifted education: a survey of empirical studies published during 1998-2010 (April). Gifted Child Quarterly, 55(2), 126-138. http://dx.doi.org/10.1177/0016986210397831. [ Links ]
19. David, A. P., Morais, M. D. F., Primi, R., & Miguel, F. K. (2014). Metáforas e pensamento divergente: criatividade, escolaridade e desempenho em Artes e Tecnologias (Metaphors and divergent thinking: creativity, schooling, and performance in Arts and Technologies). Avaliação Psicológica, 13(2), 147-156. [ Links ]
20. Epskamp, S. (2015). semPlot: Unified visualizations of structural equation models. Structural Equation Modeling: A Multidisciplinary Journal. doi: 10.1080/10705511.2014.937847. [ Links ]
21. Feist, G. (2004). The evolved fluid specificity of human creative talent. In R. Sternberg, E. Grigorenko, & J. Singer (Eds.), Creativity from potential to realization (pp. 57-82). Washington, DC: American Psychological Association. http://dx.doi.org/10.1037/10692-005. [ Links ]
22. Gagné, F. (2005). From gifts to talents: The DMGT as a developmental model. In R. J. Sternberg & J. E. Davidson (Eds.), Conceptions of giftedness (2nd ed., pp. 98-120), New York, NY: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511610455.008. [ Links ]
23. Gallagher, J. J. (2008). Psychology, psychologist, and gifted students. In S. Pfeiffer (Org.). Handbook of giftedness in children: Psycho-educational theory, research and best practices (pp. 1-11). New York: Springer. [ Links ]
24. Gardner, H. (1993). Creating minds. New York, NY: Basic Books. [ Links ]
25. Han, K. (2003). Domain specificity of creativity in young children: How quantitative and qualitative data support it. Journal of Creative Behavior, 37, 117-142. http://dx.doi.org/10.1002/j.2162-6057.2003.tb00829.x. [ Links ]
26. Heausler, N. L., & Thompson, B. (1988). Structure of the Torrance Tests of Creative Thinking. Educational and Psychological Measurement, 48, 463-468. http://dx.doi.org/10.1177/0013164488482021. [ Links ]
27. Heller, K. A. (2013). Findings from the Munich Longitudinal Study of Giftedness and their impact on identification, education and counseling. Talent Development & Excellence, 5(1), 51-64. [ Links ]
28. Hernández-Torrano, D., Ferrándiz, C., Ferrando, M., Prieto, L., & Fernández, M. C. (2014). The theory of multiple intelligences in the identification of high abilities students. Anales de Psicología, 30(1), 192-200. [ Links ]
29. Jarosewich, T., Pfeiffer, S., & Morris, J. (2002). Identifying gifted students using teacher rating scales: A review of existing instruments. Journal of Psychoeducational Assessment, 20, 322-336. http://dx.doi.org/10.1177/073428290202000401. [ Links ]
30. Kaufman, J. C., Baer, J., Agars, M. D., & Loomis, D. (2010). Creativity stereotypes and the consensual assessment technique. Creativity Research Journal, 22(2), 200-205. http://dx.doi.org/10.1080/10400419.2010.481529. [ Links ]
31. Kaufman, J. C., Lee, J., Baer, J., & Lee, S. (2007). Captions, consistency, creativity and the consensual assessment technique: new evidence of reliability. Thinking Skills and Creativity, 2, 96-106. http://dx.doi.org/10.1016/j.tsc.2007.04.002. [ Links ]
32. Kaufman, J. C., Plucker, J. A., & Russell, C. M. (2012). Identifying and assessing creativity as a component of giftedness. Journal of Psychoeducational Assessment, 30, 60-73. http://dx.doi.org/10.1177/0734282911428196. [ Links ]
33. Kaufman, S. B., & Sternberg, R. J. (2008). Conceptions of giftedness. In S. Pfeiffer (Ed.), Handbook of giftedness in children: Psycho-Educational theory, research and best practices (pp. 71-91). New York: Springer. [ Links ]
34. Kazmerski, V., Blasko, D., & Dessalegn, B. (2003). ERP and behavioral evidence of individual differences in metaphor comprehension. Memory & Cognition, 31, 673-689. [ Links ]
35. Kerr, B., & Sodano, S. (2003) Career assessment with intellectually gifted students. Journal of Career Assessment, 11, 168-186. [ Links ]
36. Kim, K. H. (2006). Can we trust creativity tests? A review of the Torrance tests of creative thinking (TTCT). Creativity Research Journal, 18, 3-14. http://dx.doi.org/10.1207/s15326934crj1801_2. [ Links ]
37. Kogan, N., & Pankove, E. (1974). Long-term predictive validity of divergent-thinking tests: Some negative evidence. Journal of Educational Psychology, 66, 802-809. http://dx.doi.org/10.1037/h0021521. [ Links ]
38. Lemons, G. (2011). Diverse perspectives of creativity testing: controversial issues when used for inclusion into gifted programs. Journal for the Education of the Gifted, 34(5), 742-772. http://dx.doi.org/10.1177/0162353211417221. [ Links ]
39. Li, H., Lee, D., Pfeiffer, S. I., Kamata, A., Kumtepe, A. T., & Rosado, J. (2009). Measurement invariance of the Gifted Rating Scales - School Form across five cultural groups. School Psychology Quarterly, 24(3), 186-198. [ Links ]
40. Lichtenberger, E. O., Volker, M. A., Kaufman, A. S., & Kaufman, N. L. (2006). Assessing gifted children with the Kaufman Assessment Battery for Children - second edition (KABC-II). Gifted Education International, 21, 99-126. http://dx.doi.org/10.1177/026142940602100304. [ Links ]
41. Love, J., Selker, R., Marsman, M., Jamil, T., Dropmann, D., Verhagen, A. J., Ly, A., Gronau, Q. F., Smira, M., Epskamp, S., Matzke, D., Wild, A., Knight, P., Rouder, J. N., Morey, R. D., & Wagenmakers, E.-J. (2015). JASP (Version 0.7.1)(Computer software). [ Links ]
42. Lubinski, D., Schmidt, D. B., & Benbow, C. P. (1996). A 20-year stability analysis of the study of values for intellectually gifted individuals from adolescence to adulthood. Journal of Applied Psychology, 81(4), 443-451. [ Links ]
43. Macintosh, R., & Hashim, S. (2003). Variance estimation for converting MIMIC model parameters to IRT parameters in DIF analysis. Applied Psychological Measurement, 27(5), 372-379. [ Links ]
44. Miller, E. M., & Cohen, L. M. (2012). Engendering talent in others: expanding domains of giftedness and creativity. Roeper Review, 34, 104-113. http://dx.doi.org/10.1080/02783193.2012.660684. [ Links ]
45. Montero-Linares, J., Navarro-Guzmán, J. I., & Aguilar-Villagrán, M. (2013). Procesos de automatización cognitiva en alumnado con altas capacidades intelectuales (Cognitive processes automation in highly gifted students). Anales de Psicología, 29(2), 454-461. http://dx.doi.org/10.6018/analesps.29.2.123291. [ Links ]
46. Muthén, L. K., & Muthén, B. O. (2010). Mplus User's Guide. Sixth Edition. Los Angeles, CA: Muthén & Muthén. [ Links ]
47. Nakano, T. C., & Primi, R. (2014). Rasch-Master's Partial Credit Model in the Assessment of Children's Creativity in Drawings. The Spanish Journal of Psychology, 17, 1-16. http://dx.doi.org/10.1017/sjp.2014.36. [ Links ]
48. Nakano, T. C., Primi, R., Abreu, I. C. C., Gozzoli, M. Z., Caporossi, D. C., Miliani, A. F. M., & Martins, A. A. (2015). Bateria para avaliação das altas habilidades/superdotação: análise dos itens via Teoria de Resposta ao Item (Battery for assessment of giftedness: analysis conducted using Item Response Theory). Estudos de Psicologia (Campinas), 32(4), 725-737. http://dx.doi.org/10.1590/0103-166X2015000400016. [ Links ]
49. Nakano, T. C., Wechsler, S. M., Campos, C. R., & Milian, Q. G. (2015). Intelligence and creativity: relationships and their implications for Positive Psychology. Psico-USF, 20(2), 195-206. http://dx.doi.org/10.1590/1413-82712015200201. [ Links ]
50. Nakano, T. C., Wechsler, S. M., & Primi, R. (2011). Teste de Criatividade Figural Infantil (Test of the Children's Figural Creativity). São Paulo: Editora Vetor. [ Links ]
51. Pfeiffer, S. I. (2015). El modelo tripartido sobre la alta capacidad y las mejores prácticas en la evaluación de los más capaces. Revista de Educación, 368, 66-95. [ Links ]
52. Pierson, E. E., Kilmer, L. M., Rothlisberg, B. A., & McIntosh, D. E. (2012). Use of brief intelligence tests in the identification of giftedness. Journal of Psychoeducational Assessment, 30(1), 10-24. http://dx.doi.org/10.1177/0734282911428193. [ Links ]
53. Plucker, J. A. (1999). Is the proof in the pudding? Reanalysis of Torrance's (1958 to present) longitudinal data. Creativity Research Journal, 12, 103-114. http://dx.doi.org/10.1207/s15326934crj1202_3. [ Links ]
54. Plucker, J. A., & Runco, M. A. (1998). The death of creativity measurement has been greatly exaggerated: current issues, recent advances, and future directions in creativity assessment. Roeper Review, 21(1), 36-39. http://dx.doi.org/10.1080/02783199809553924. [ Links ]
55. Primi, R. (2014). Divergent Productions of Metaphors: Combining Many-Facet Rasch Measurement and Cognitive Psychology in the Assessment of Creativity. Psychology of Aesthetics, Creativity, and the Arts (online publication). http://dx.doi.org/10.1037/a0038055. [ Links ]
56. Primi, R., & Almeida, L. S. (2000). Baterías de Provas de Raciocínio (BPR-5): Manual técnico (Battery for Reasoning Tests: technical manual). São Paulo: Casa do Psicólogo. [ Links ]
57. Primi, R., Miguel, F. K., Couto, G., & Muniz, M. (2007). Precisão de avaliadores na avaliação da criatividade por meio da produção de metáforas (Inter-rater reliability in the creativity assessment using metaphor production). Psico-USF, 12(2), 197-210. [ Links ]
58. Primi, R., Nakano, T.C., Morais, M.F., Almeida, L.S. & David, A.P.M. (2013). Factorial Structure analysis of the Torrance test in Portuguese students. Estudos de Psicologia (Campinas), 30(1), 19-28. http://dx.doi.org/10.1590/S0103-166X2013000100003. [ Links ]
59. Renzulli, J. S. (2005). The three-ring definition of giftedness: A developmental model for promoting creative productivity. In R. J. Sternberg & J. E. Davidson (Eds.), Conceptions of giftedness (2nd ed., pp. 246-280). New York, NY: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511610455.015. [ Links ]
60. Renzulli, J. S., & Gaesser, A. H. (2015). Un sistema multicriterial para la identificación del alumnado de alto rendimiento y de alta capacidad creativo-productiva. Revista de Educación, 368, 96-131. [ Links ]
61. Revelle, W. (2015) Psych: Procedures for Personality and Psychological Research, Northwestern University, Evanston, Illinois, USA, http://CRAN.R-project.org/package=psych Version = 1.5.8. [ Links ]
62. Ribeiro, W. J., Nakano, T. C., & Primi, R. (2014). Validade da estrutura fatorial de uma Bateria de Avaliação de Altas Habilidades (Validity of the factor structure of a Battery of Assessment of High Abilities). Psico, 45(1), 100-109. http://dx.doi.org/10.15448/1980-8623.2014.1.13636. [ Links ]
63. Robinson, A., & Clinkenbeard, P. R. (2008). History of giftedness: perspectives from the past presage modern scholarship. In S. Pfeiffer (Org.). Handbook of giftedness in children: Psycho-Educational theory, research and best practices (pp. 13-31). New York: Springer. [ Links ]
64. Roid, G. H. (2003). Stanford-Binet Intelligence Scales, Fifth Edition. Itasca, IL: Riverside Publishing. [ Links ]
65. Rosseel, Y. (2012). lavaan: An R Package for Structural Equation Modeling. Journal of Statistical Software, 48(2), 1-36. URL http://www.jstatsoft.org/v48/i02/. [ Links ]
66. Runco, M. A., Millar, G., Acar, S., & Cramond, B. (2011). Torrance Tests of Creative Thinking as a predictor of personal and public achievement: a fifty-year follow-up. Creativity Research Journal, 22(4), 361-368. [ Links ]
67. Runco, M. A., & Mraz, W. (1992). Scoring divergent thinking tests using total ideational output and a creativity index. Educational and Psychological Measurement, 52, 213-221. http://dx.doi.org/10.1177/001316449205200126. [ Links ]
68. Schneider, W. J., & McGrew, K. (2012). The Cattell-Horn-Carroll model of intelligence. In, D. Flanagan & P. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (3rd ed.) (pp. 99-144). New York: Guilford. [ Links ]
69. Schraw, G. (2005). Review of the Khatena-Torrance Creative Perception Inventory. In R. Spies & B. Plake (Eds.), The sixteenth mental measurements yearbook (pp. 542-543). Lincoln: University of Nebraska Press. [ Links ]
70. Silvia, P. J. (2007). An introduction to multilevel modeling for research on the psychology of art and creativity. Empirical Studies of the Arts, 25(1), 1-20. http://dx.doi.org/10.2190/6780-361T-3J83-04L1. [ Links ]
71. Silvia, P. J. (2011). Subjective scoring of divergent thinking: examining the reliability of unusual uses, instances and consequences tasks. Thinking Skills and Creativity, 6, 24-30. http://dx.doi.org/10.1016/j.tsc.2010.06.001. [ Links ]
72. Silvia, P. J. (2015). Intelligence and creativity are pretty similar after all. Educational Psychological Review (online publication). http://dx.doi.org/10.1007/s10648-015-9299-1. [ Links ]
73. Silvia, P. J., & Beaty, R. E. (2012). Making creative metaphors: the importance of fluid intelligence for creative thought. Intelligence, 40, 343-351. [ Links ]
74. Silvia, P. J., Martin, C., & Nusbaum, E. C. (2009). A snapshot of creativity: evaluating a quick and simple method for assessing divergent thinking. Thinking Skills and Creativity, 4, 79-85. http://dx.doi.org/10.1016/j.tsc.2009.06.005. [ Links ]
75. Silvia, P. J., Winterstein, B. P., & Willse, J. T. (2008). The madness to our method: some thoughts on divergent thinking. Psychology of Aesthetics, Creativity and the Arts, 2(2), 109-114. http://dx.doi.org/10.1037/1931-3896.2.2.109. [ Links ]
76. Sternberg, R. J. (1981). A componential theory of intellectual giftedness. Gifted Child Quarterly, 25, 86-93. [ Links ]
77. Sternberg, R. J. (2003). WICS: Wisdom, Intelligence, and Creativity, Synthesized. Cambridge, UK: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511509612. [ Links ]
78. Sternberg, R. J. (2010). Assessment of gifted students for identification purposes: new techniques for a new millennium. Learning and Individual Differences, 20, 327-336. doi: 10.1016/j.lindif.2009.08.003. [ Links ]
79. Subotnik, R. F., Olszewski-Kubilius, P., & Worrell, F. C. (2011). Rethinking giftedness and gifted education: a proposed direction forward based on psychological science. Psychological Science in the Public Interest, 12(1), 3-54. http://dx.doi.org/10.1177/1529100611418056. [ Links ]
80. Torrance, E. P. (1969). Curiosity of gifted children and performances on timed and untimed tests of creativity. Gifted Child Quarterly, 13, 155-158. [ Links ]
81. Torrance, E. P. (1972). Predictive validity of the Torrance tests of creative thinking. Journal of Creative Behaviour, 6, 236-252. http://dx.doi.org/10.1002/j.2162-6057.1972.tb00936.x. [ Links ]
82. Torrance, E. P. (1987). Future career image as a predictor of creative achievement in the 22-year longitudinal study. Psychology Reports, 60, 574. http://dx.doi.org/10.2466/pr0.1987.60.2.574. [ Links ]
83. Torrance, E. P. (2002). The manifest: A guide to developing a creative career. Westport, CT: Ablex. [ Links ]
84. Torrance, E. P., & Wu, T. H. (1981). A comparative longitudinal study of adult creative achievements of elementary school children identified as high intelligence and highly creative. Creativity Child Adult Quarterly, 6, 71-76. [ Links ]
85. Van Tassel-Baska, J., Feng, A. X., & Evans, B. L. (2007). Patterns of identification and performance among gifted students identified through performance tasks. Gifted Child Quarterly, 51(3), 218-231. doi: 10.1177/0016986207302717. [ Links ]
86. Virgolim, A. M. R. (1997). O indivíduo superdotado: história, concepção e identificação (The gifted individual: history, conception and identification). Psicologia: Teoria e Pesquisa, 13(1), 173-183. [ Links ]
87. Volker, M. A., & Phelps, L. (2004). Identification of gifted students with the WISC-IV. In D. P. Flanagan & A. S. Kaufman (Eds.), Essentials of WISC-IV assessment (pp. 216-224). Hoboken, NJ: Wiley. [ Links ]
88. Wilson, R. C., Guilford, J. P., & Christensen, P. R. (1953). The measurement of individual differences in originality. Psychological Bulletin, 50(5), 362-370. doi: 10.1037/h0060857. [ Links ]
89. Zeng, L., Proctor, R. W., & Salvendy, G. (2011). Can traditional divergent thinking tests be trusted in measuring and predicting real-world creativity? Creativity Research Journal, 23(1), 24-37. http://dx.doi.org/10.1080/10400419.2011.545713. [ Links ]
Correspondence:
Ricardo Primi,
Universidade São Francisco,
Laboratório de Avaliação Psicológica e Educacional
[Laboratory of Psychological and Educational Assessment],
Rua Alexandre Rodrigues Barbosa, 45, CEP 13251-900,
Itatiba, São Paulo (Brazil).
E-mail: rprimi@mac.com
Article received: 25-05-2015
Revised: 19-02-2016
Accepted: 04-03-2016