Introduction
Hearing loss present from birth has a notable impact on a child's emotional, linguistic, academic, and social development, particularly in cases of severe and profound loss. Nowadays, however, these difficulties can be reduced through early diagnosis and the early initiation of rehabilitation (Martínez et al., 2021; Trinidad et al., 2010).
Approximately 10 years ago, the Commission for Early Detection of Hearing Impairment estimated that every year in Spain around 2,500 new families had a child with a hearing deficit, of whom 500 had profound deafness (Núñez-Batalla et al., 2016). Estimates of the incidence of childhood deafness in Spain have become more precise since the implementation of early detection programs. Until then, data came from diagnoses made during school age or from studies of specific pathologies, placing the incidence at 1 in 1,000 births. Nowadays, the figures range from 0.8 to 3.3 per 1,000 births (Xunta de Galicia, 2023).
It should be noted that 40% of hearing losses co-occur with other pathologies. The most frequent are cognitive difficulties (present in 8% of cases), language development disorders (8% of cases), and autism spectrum disorder (7%). The remaining 17% of cases are associated with a wide variety of disorders such as cerebral palsy, visual difficulties, and attention deficit hyperactivity disorder (Núñez-Batalla et al., 2021). This greatly complicates both diagnosis and subsequent treatment, slowing down the process and allowing difficulties to accumulate, including in language development (Núñez-Batalla et al., 2023). Therefore, an early assessment of potential repercussions at all levels of development must be carried out.
As mentioned above, universal screening programs, together with technological advances and increasing specialization in intervention, have reduced the difficulties associated with prelingual hearing loss. It is now known that intervention within the first six months of life significantly lessens these difficulties and benefits language development. For instance, some studies indicate that early interventions significantly enhance vocabulary levels (de Diego-Lázaro et al., 2019). However, despite these advances, differences in language development remain considerable when compared to hearing peers.
In the lexical-semantic area, individuals with prelingual deafness face challenges in learning novel words, as this skill seems to develop more slowly than in hearing children (Quittner et al., 2016). Kallioinen et al. (2023) consider it important to strengthen semantic skills such as vocabulary and conceptual knowledge to enhance cortical semantic processing in children with cochlear implants.
In the morphosyntactic area, research suggests that deaf children with cochlear implants use fewer coreferential elements, a lower variety of cohesive elements, and less discursive updating (Fresneda & Madrid, 2017). However, some studies comparing children with cochlear implants to hearing children educated in the same school found no differences in their morphosyntax, showing age-appropriate linguistic abilities in both cases (Falcón-González et al., 2019; Le Normand & Thai-Van, 2023).
At a pragmatic level, differences between children with hearing loss and hearing children are also noticeable. Boons et al. (2013) found that the narrative skills of children with cochlear implants were adequate in terms of the quantity and coherence of the narrated story; however, they scored lower than their hearing peers in terms of the quality, content, and effectiveness of the stories. Their sample, composed of children implanted before the age of two, with two cochlear implants and a single spoken language, reached values within the average range for their age. On the other hand, Walker et al. (2017) conducted a longitudinal study of children between 5 and 6 years old with severe and profound hearing loss, examining their understanding of false belief tasks. The results showed that deaf children had a delay in understanding first-order false belief tasks compared to their hearing peers: 84% of hearing children understood and performed false belief tasks, compared to 41% of deaf children.
Additionally, differences seem to exist regarding symbolic play. Bofarull & Fernández (2012), when comparing symbolic play between deaf and hearing children, concluded that language acquisition is fundamental for tasks such as object substitution, a crucial aspect for the onset of symbolic play. In this regard, Quittner et al. (2016) pointed out that deaf children experience a delay in the acquisition of symbolic play, with a more pronounced delay when the implantation occurs after the age of two. Other skills such as empathy and prosocial motivation are lower in deaf children, regardless of the degree of hearing loss or the type of hearing aid. Considering the communication system used, children with a system geared toward oral language were more empathetic and prosocial than those using sign language, although not at the level of their hearing peers (Netten et al., 2015).
Regarding reading, Kyle et al. (2016) consider that vocabulary level is a predictor of reading ability in deaf children. More recently, Paniagua-Martín et al. (2022) claim that both the breadth and depth of vocabulary are related to reading difficulties in deaf children. On the other hand, the results of several studies on the relationship between phonology and reading ability in deaf children show that deaf children can access phonology through visual channels like cued speech, without the need for auditory input (Alasim & Alqraini, 2020; Kronenberger et al., 2020). This is crucial when designing interventions aimed at promoting reading skills in children with hearing difficulties.
In terms of executive functioning, it is important to consider the relationship between verbal abilities and this aspect of cognitive processing. Kronenberger et al. (2013) suggest that difficulties in directing and controlling thoughts and behaviour may contribute to inhibitory control issues in deaf children, indicating potential cognitive effects beyond those associated solely with auditory impairment.
Botting et al. (2017) propose an intertwined relationship between expressive vocabulary and non-verbal executive function, suggesting that language skills play a mediating role in executive performance. They therefore consider language to be crucial for adequate executive performance, and not the other way around. Consistent with this, Hall et al. (2017) found that deaf children from deaf families, with a history of auditory but not linguistic deprivation, did not have behavioural problems related to executive function.
Some studies such as Figueras et al. (2008) highlight the interconnection between the development of executive functions and language skills in deaf children. For this reason, they underline the need for clinical and educational approaches that not only address language development, but also consider executive functions. This may lead to improved social adjustment, academic success and general well-being in deaf children. The clinical implications would range from early intervention to the design of educational programs and the training of professionals, emphasising a comprehensive and multidisciplinary approach.
However, Kotowicz et al. (2023), using an approach similar to that of Hall et al. (2017), present contrasting findings, indicating no significant differences in executive functioning between native signing deaf children and their hearing peers. They attribute this to the supportive environment provided by signing deaf parents, suggesting that various factors beyond language and cognitive processes, including social and familial support, contribute to the development of executive functions.
In summary, research on the relationship between executive function and hearing loss in children reveals a complex interplay between language development, cognitive skills, and the social environment. While some studies suggest that executive function difficulties may be related to deficits in language and hearing, others highlight the importance of factors such as family support and linguistic environment in the development of these skills. These findings underscore the need to consider multiple variables when assessing and addressing the needs of children with hearing loss, acknowledging the influence of both intrinsic and extrinsic factors on their cognitive and social development.
The interventions carried out to promote communication and language development in children with hearing loss are diverse. A simple and clarifying classification is presented by Gravel & O'Gara (2003), who categorize them into 3 groups: oral therapies, therapies that employ visual and manual cues, and those that use sign language. Within oral therapies are auditory-verbal therapy (AVT) and auditory-oral therapy (AOT). Both are quite similar, but in the former, language is solely acquired through hearing, whereas in AOT, lip reading, facial expression, and natural gestures are also used to support language development.
On the other hand, there are several therapies that use visual and manual cues. These include cued speech, using manual cues simultaneously with oral language; manually coded English, using oral language and signs simultaneously; total communication, promoting the use of multiple modalities like manually coded systems, gestures, lip reading, and auditory input; and Simultaneous-Communication (Sim-Com), similar to total communication but not requiring auditory input.
Lastly, there are therapies that utilize sign language as a communication option. This would include the Bilingual-Bicultural (Bi-Bi) approach, which uses sign language as the primary language and teaches the second language through a combination of manually coded systems and cued speech.
Most studies evaluating the effectiveness of interventions in deaf children focus on assessing specific types of intervention rather than comparing interventions. For instance, Yoshinaga-Itano (2010) indicated that deaf children who underwent cochlear implants between 12 and 24 months and received a combination of AOT and sign language could attain age-appropriate levels in expressive lexicon and receptive syntax. Bayard et al. (2019) highlighted the benefits of cued speech therapy for proper language acquisition and development. Tejeda-Franco et al. (2020) pointed out that AVT was an effective technique not only for language development but also for improving speech acoustics in children with hearing difficulties.
However, few studies compare the efficacy of different interventions. Geers et al. (2011) indicated that children with cochlear implants who received oralist intervention performed better in tests evaluating oral language development compared to those using sign language. Marshall et al. (2018), comparing interventions combining sign language and oral language against solely oralist therapy, found similar semantic fluency difficulties in both cases. Recently, Van Bogaert et al. (2023) compared AVT with cued speech therapy. They concluded that deaf children not using cued speech or other consistent visual support had lower lexical judgment skills (detecting phonological distortions) than their peers. Moreover, they indicated that cochlear implants alone were insufficient to develop good speech perception skills.
This study aims to examine linguistic and neuropsychological differences in a sample of deaf children compared to their hearing peers. It also seeks to highlight potential neuropsychological and linguistic differences among children with hearing loss based on the type of intervention received for language and communication development. Specifically, three communicative modalities were compared: oral language without the use of augmentative or alternative systems, sign language, and a total communication system through bimodal/cued speech. The hypotheses are, on the one hand, that the expressive and receptive language skills of deaf children will be lower than those of hearing children. In addition, cognitive skills (working memory, auditory attention, cognitive flexibility, inhibitory control and reasoning ability) measured through the CELF5 subscales will be lower in deaf children compared to their hearing peers. Finally, it is expected that deaf children who use the total communication modality will present better language and cognitive skills than those who use oral language or sign language.
Method
A cross-sectional study was conducted, collecting data on linguistic and neuropsychological abilities in a group of deaf children compared to their hearing peers. The analysed variables included working memory and auditory attention, inhibitory control, cognitive flexibility, and reasoning ability. Linguistically, data were gathered on overall language performance, receptive and expressive language levels, as well as linguistic content.
Participants
The total sample for this study consisted of 124 participants, of whom 64 had hearing loss and 60 were hearing. The mean age was 8.52 years (SD = 1.94) in the hearing loss group and 8.65 years (SD = 1.85) in the hearing group (range 5-11 years). In relation to gender, the hearing loss group comprised 34 males and 30 females, while the hearing group had 31 male and 29 female participants.
The hearing group was recruited from a public school in A Coruña, while the deaf/hard of hearing children were recruited from the Otorhinolaryngology Departments of the University Hospitals of A Coruña, Vigo, Lugo, and Santiago de Compostela, the Galician Federation of the Deaf, and a Specialized Educational Center for the Deaf in the Community of Madrid. Inclusion criteria for both groups were being between 5 and 11 years old and not having a diagnosis of a neurodevelopmental disorder. All children with hearing loss were orally proficient; those who used sign language were bilingual in sign language and oral language.
Concerning the type of prosthesis used, the most common were either hearing aids or cochlear implants combined with hearing aids, as shown in Table 1. The mean age at which the prostheses were fitted was 24.56 months. Of the participants, 45.31% had profound bilateral sensorineural deafness and 26.56% had severe sensorineural deafness, while the remaining 28.13% exhibited other types of hearing loss with varying degrees of severity (see Table 2).
Participants with hearing loss used 3 communicative modalities: oral language without the use of augmentative or alternative systems (38 participants), sign language (11 participants, of whom only 2 were native signers), and a total communication system using bimodal/cued speech (15 participants). Bimodal communication involves the simultaneous use of oral language and gestures (Monfort et al., 1982), while cued speech uses visual cues to support, mainly, the acquisition and development of phonology.
Finally, it should be noted that the average age of diagnosis of deafness was 13.47 months for the entire group. Considering the communicative modality, children using sign language received the diagnosis later, with an average of 31.91 months, followed by those using oral language with an average of 16 months, and those using total communication with an average of 11.33 months.
Instruments
To collect sociodemographic data, a questionnaire was developed to gather information on gender, age, place of origin, and spoken languages. In the case of the group with hearing loss, additional questions were included regarding the age of diagnosis of deafness, type of deafness, use of prosthetics, age of placement, and communication modality employed.
The instrument used to collect information about linguistic abilities was the Clinical Evaluation of Language Fundamentals-5 (CELF5) (Wiig et al., 2017). It is a standardized test that identifies, diagnoses, and monitors language and communication disorders in children and adolescents aged 5 to 15 years. It combines different tests in three age ranges: 5 to 8 years, 9 to 12 years, and 13 to 15 years. It consists of 14 subtests that assess various language competencies, including morphosyntax, semantics, pragmatics, and the ability to remember and retrieve oral language. These subtests are classified based on age to obtain a Core language score and four indices: Receptive language index, Expressive language index, Linguistic content index, and Linguistic structure index (see Table 3).
In this study, 4 subtests were used (Word classes, Following directions, Formulated sentences, and Recalling sentences), and the Core language score and 3 indices (Receptive language index, Expressive language index, and Linguistic content index) were calculated, as these were common to all the age groups in the sample (between 5 and 11 years old).
The Following directions subtest assesses the understanding of oral instructions. The subject must point to the picture in the stimulus booklet that corresponds to the oral instruction; difficulty and length increase as the test progresses. It contains 33 items, scored 1 for correct and 0 for incorrect responses. The Word classes subtest assesses the ability to understand relationships between words based on semantic fields and specific semantic relations. From the three or four words read aloud, the subject must choose the two that are related; for children from 5 to 8 years old, picture support is used for the first 12 items. It comprises 40 items, scored 1 or 0 for correct or incorrect responses. The Formulated sentences subtest evaluates the ability to orally construct complete, semantically and grammatically correct sentences of increasing length and complexity. After observing an image, the subject must produce a sentence containing the word or words read aloud. Items are scored from 0 to 2 points, with a maximum score of 48. Lastly, the Recalling sentences subtest assesses the subject's ability to listen to orally presented sentences of increasing length and complexity and repeat them without changing the meaning, content, or structure of words or phrases. Items are scored from 0 to 3 points, with a maximum score of 78.
With regard to the Core language score, it is obtained by adding the scaled scores of four subtests to assess the language competence of the evaluated child. While subtests may vary depending on the subject's age, those included are the most discriminative and clinically sensitive in identifying potential language disorders. The score range is from 40 to 160. The receptive language index measures listening and auditory comprehension abilities, obtained from the scaled scores of two subscales focused on receptive language (Word classes and Following directions). On the other hand, the expressive language index is derived from two subscales focused on expressive language (Formulated sentences and Recalling sentences). Finally, the linguistic content index evaluates various aspects of semantic and lexical development. It is derived from two tests measuring semantic and lexical aspects (Word classes and Following directions). The tests included in each index vary depending on the children's age. The score range for the indices varies from 45 to 145.
Although the CELF5 is not specifically designed to assess executive functioning, some of its subtests can be used as part of a broader neuropsychological assessment (Pearson Clinical, n.d.). Specifically, they provide information on working memory, auditory attention, inhibitory control, cognitive flexibility, and reasoning ability. As mentioned above, in this study the assessment was conducted using the CELF5 subtests that are common to ages 5 to 11. Table 4 shows the distribution of the 4 subtests applied in this study according to the cognitive skill evaluated.
The internal consistency of the subtests comprising the CELF5 is high: Cronbach's α values for the Spanish standardization sample range between .79 and .97 across subtests.
Procedure
The study was conducted following approval from the Ethics Committee of the Galician Health Service (SERGAS) in the A Coruña-Ferrol health area (file number: 2019/475). Access to electronic medical records (IANUS program) of children with hearing loss, aged between 5 and 11 years, attending otorhinolaryngology services at Galician university hospitals, was granted with authorization from the SERGAS Ethics Committee. Subsequently, families were contacted and appointments were scheduled at their respective hospitals. During these appointments, the study's objectives were explained, and voluntary participation was requested through the signing of informed consent forms. Sociodemographic data were collected and the CELF5 assessment was administered in a quiet room.
For participants associated with the Galician Federation of the Deaf, a similar procedure was followed. Initial contact was made with the institution's management, and upon authorization, the institution's speech therapist identified eligible children and informed their families about the study. Interested families provided signed informed consent forms and were scheduled for testing.
The sample of children with deafness was augmented by contacting a Specialized Educational Center for the Deaf in the Community of Madrid. Following authorization from the center's management, the guidance department communicated with families to explain the study's purpose and objectives. Informed consent forms were signed, and tests were conducted in a classroom within the educational center, minimizing disruption to ongoing classes.
For hearing children, contact was made with the management of an Early Childhood and Primary Education Center in the province of A Coruña. After explaining the study's purpose and obtaining authorization, information and consent forms were sent to families of students aged between 5 and 11 years. Upon receiving voluntary consent, a six-week data collection period was established, and the CELF5 assessment was administered in a designated classroom. Efforts were made to minimize disruption to the school's routine and individual child activities.
In both groups, tests were administered individually, with sessions lasting between 45 and 60 minutes. The instructions for each test were delivered in sign language for children who use it as their primary communication method. Although the evaluator was proficient in sign language, she received guidance and support from a Spanish sign language interpreter who was well-versed in the test procedures. For children using oral communication aids (bimodal/cued speech), minimal assistance was needed as their responses were prompt and appropriate. If any difficulty arose during a subtest, bimodal support was provided. Finally, for the group of hearing-impaired children who communicated orally without aids, the instructions were given verbally.
Analysis of data
The obtained data were analysed using the statistical program SPSS version 28. Initially, a descriptive analysis was conducted based on the scaled scores obtained by each of the groups in each of the four subtests, the Core language score, and the 3 indices of the CELF5.
Differences between both groups regarding specific linguistic domains were examined through the Core language score, Receptive language index, Expressive language index, and Linguistic content index.
Comparisons between the scores obtained by both groups were approached from a neuropsychological perspective by classifying the CELF5 subtests based on the involved cognitive functions. Specifically, differences were analysed concerning working memory and auditory attention, inhibitory control, cognitive flexibility, and reasoning ability.
Levene's test for homogeneity of variances indicated that the variances of the scores obtained by the hearing group and the group with hearing loss in the CELF5 subtests were not homogeneous. Therefore, to test whether there were statistically significant differences between the two groups in the analysed variables, a non-parametric test, the Mann-Whitney U test, was applied.
Finally, the Kruskal-Wallis test was applied to examine whether the evaluated linguistic abilities differed within the group with hearing loss according to the communication modality used by the participants (oral language, sign language, or bimodal/cued speech).
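Although all the analyses reported here were carried out in SPSS 28, the workflow described above can be illustrated with a minimal sketch in Python using scipy; the score vectors and the group split below are hypothetical placeholders rather than study data.

# Minimal illustrative sketch of the non-parametric workflow described above
# (the study itself used SPSS 28); all scores here are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hearing = rng.normal(100, 15, 60)  # hypothetical Core language scores, hearing group (n = 60)
deaf = rng.normal(90, 20, 64)      # hypothetical Core language scores, deaf group (n = 64)

# 1. Levene's test for homogeneity of variances between the two groups
lev_stat, lev_p = stats.levene(hearing, deaf)

# 2. Variances not homogeneous -> compare the groups with the Mann-Whitney U test
u_stat, u_p = stats.mannwhitneyu(deaf, hearing, alternative="two-sided")

# 3. Within the deaf group, compare the three communicative modalities
#    (oral, sign, total communication) with the Kruskal-Wallis test
oral, sign, total_comm = deaf[:38], deaf[38:49], deaf[49:]  # hypothetical 38/11/15 split
kw_stat, kw_p = stats.kruskal(oral, sign, total_comm)

print(f"Levene p = {lev_p:.3f}; Mann-Whitney p = {u_p:.3f}; Kruskal-Wallis p = {kw_p:.3f}")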
Results
Comparison of deaf group with hearing peers based on language skills
First, a descriptive analysis was conducted comparing the mean scores obtained by both groups in each of the 4 subtests, the Core language score, and the 3 indices (see Table 5). Table 5 also includes the percentile corresponding to each score.
Table 5. Mean scaled scores and percentiles obtained by the deaf group (DG) and hearing group (HG) on the CELF5.
For the 4 subtests, performance based on the scaled score is interpreted as follows: a scaled score of 6 or lower (more than 1 SD below the mean) indicates below-average performance; a score of 7 (1 SD below the mean) indicates borderline performance; scores from 8 to 12 (within 1 SD of the mean) indicate average performance; and scores of 13 or higher (1 SD or more above the mean) indicate above-average performance.
As seen in Table 5, the deaf group achieves average performance in 3 of the subtests (Following directions, Word classes, Formulated sentences) and below-average performance in the Recalling sentences subtest.
In relation to the Core language score and the 3 indices, the mean is 100 with an SD of 15: scores within 1 SD of the mean (85 to 115) are considered average, scores below 85 below average, and scores of 115 or higher above average. The deaf group achieves scores around the average for the Core language score and the 3 indices, although slightly below the mean.
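As a compact illustration of the interpretation bands just described, a hypothetical helper along the following lines could map CELF5 scores onto performance categories (the thresholds are those stated above; the function names are illustrative only).

# Illustrative only: maps CELF5 scores onto the performance bands described above.
def subtest_band(scaled_score: int) -> str:
    # Subtest scaled scores have a mean of 10 and an SD of 3.
    if scaled_score <= 6:
        return "below average"
    if scaled_score == 7:
        return "borderline"
    if scaled_score <= 12:
        return "average"
    return "above average"

def index_band(standard_score: int) -> str:
    # The Core language score and the indices have a mean of 100 and an SD of 15.
    if standard_score < 85:
        return "below average"
    if standard_score < 115:
        return "average"
    return "above average"

print(subtest_band(6), "|", index_band(92))  # -> below average | average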
As shown in Table 5, the hearing group achieves average performance in 3 of the subtests (Following directions, Word classes, Recalling sentences), and above-average performance in the Formulated sentences subtest. They also perform around average on the Core language score and the 3 indices, with higher performance on the expressive language index, which places them in the 80th percentile.
Therefore, although the performance of the deaf group is adequate in most subtests, the scores obtained by the hearing individuals are higher. Moreover, the discrepancies between the minimum and maximum scores are larger in the group with hearing difficulties, causing greater deviations from the mean. These differences can be visually observed in Figures 1 and 2.

Figure 1. Comparison of performance in the 4 subtests of CELF5 between the deaf group and the hearing group.

Figure 2. Comparison of performance in the Core language score and 3 indices of CELF5 between the deaf group and the hearing group.
Second, a non-parametric analysis was performed using the Mann-Whitney U test to test for possible differences in language proficiency between the two groups. The results are presented in Table 6. As can be seen, the differences between the two groups are statistically significant both in the Core language score and in the indices of Expressive, Receptive and Content language. The group of hearing children obtained the highest scores. As shown in Figure 3, the differences are more pronounced for the Core language score and the Expressive language index.
Table 6. Results of the Mann-Whitney U test in the tests related to specific linguistic domains and executive functioning.

*p ≤ .05;
**p ≤ .01

Figure 3. Comparison of the average ranks obtained by each group in the subtests related to specific linguistic domains.
The analysis also showed statistically significant differences between the two groups in the 4 subtests used, with the hearing group obtaining the higher scores. Figure 4 compares the average ranks obtained by each group in each of the subtests. In all of them, the hearing group achieved higher scores than the group with hearing loss, with the difference being more pronounced in the Recalling sentences and Formulated sentences subtests.
Comparison of deaf group with hearing peers based on cognitive skills
As mentioned above, by categorising the CELF5 subtests according to the cognitive function involved, comparisons were also made between the hearing group and the group with hearing loss. Thus, working memory and auditory attention were analysed through the Following directions, Word classes, and Recalling sentences subtests, as they involve memorization and attention to detail. As indicated in the previous paragraph, the results showed statistically significant differences in all subtests in favour of the group without hearing difficulties (see Table 6). Therefore, in the tasks presented in these 3 subtests, deaf children would have more difficulty than their hearing peers in remembering information containing varying degrees of specificity or detail.
Cognitive flexibility was assessed through the Following directions, Word classes, and Recalling sentences subtests, which allow observation of whether children can switch from one instruction to another according to the demands of each subtest or whether they perseverate on the tasks; children with hearing loss showed greater difficulty in doing so. Additionally, the Recalling sentences subtest enables the analysis of inhibitory control, depending on whether participants display impulsivity in responding and modify sentences during repetition. As mentioned above, this is the test where deaf children's scores differ most from those of hearing children, with the latter scoring higher.
Finally, the results of the Formulated Sentences and Word Classes subtests were used to assess reasoning ability, specifically observing whether children are capable of making inferences and predictions or extracting the common concept among several terms. Again, the scores of the children with hearing loss were lower.
Comparison of the linguistic and cognitive skills of deaf children on the basis of the communicative modality
Given the small size of the groups, the Kruskal-Wallis test was used to check whether the communicative modality used by the participants in the group with hearing loss (oral language, sign language, or bimodal/cued speech) could influence the results obtained in the CELF5.
Regarding linguistic performance, the results show statistically significant differences between the 3 communicative modalities in the Core language score and the 3 indices (Receptive language, Expressive language, and Linguistic content), as can be seen in Table 7. Figure 5 shows the average ranks in each index according to the communicative modality. Participants who used the total communication system obtained the highest scores, followed by those who used oral language and those who used sign language. The largest discrepancies between the average ranks occurred in the Receptive language index and the Linguistic content index.
Table 7. Results of the Kruskal-Wallis test in the tests related to linguistic performance and executive functioning based on the communicative modality.

*p ≤ .05;
**p ≤ .01

Figure 5. Comparison of the average ranks obtained by the hearing loss group in the indices related to linguistic performance based on the communication modality.
Table 7 also shows that statistically significant differences emerged according to the communicative modality used in the 4 CELF5 subtests, which assess not only language skills but also executive functioning. As in the previous case, the highest scores in the 4 subtests were obtained by the deaf children who used the total communication system.
Figure 6 displays the average ranks in each of the subtests according to the communication modality used. As can be seen, the largest discrepancies in average ranks occurred in the Following directions subtest, followed by Word classes. The children who use the total communication modality achieved the highest scores, followed by those using oral language and those using sign language. The highest scores were obtained in Following directions and Word classes, both related to working memory, attention, and cognitive flexibility; as mentioned earlier, performance in Word classes is also linked to reasoning ability. It can also be observed that children using sign language and those using oral language scored very similarly on the Word classes (related words) subtest, which, as noted above, involves working memory, cognitive flexibility, and reasoning. In the remaining subtests, children who use sign language tended to obtain the lowest scores.
Discussion
The results obtained in this study show that, although deaf children present linguistic and executive functioning skills around the average, the differences from their hearing peers are noticeable.
In relation to the results obtained in the Core language score and the three indices analysed (Receptive, Expressive, and Linguistic content), deaf children performed below average, showing statistically significant differences from their hearing peers. Some current studies do not find such differences, although their comparisons are based on auditory age rather than chronological age (Falcón-González et al., 2019). More recently, Socher & Ingo (2023), analysing syntactic and grammatical complexity, found no differences between deaf and hearing individuals, although the sample used was small.
In the Recalling sentences test, deaf children exhibit the lowest performance. As previously described for the CELF5 (Wiig et al., 2017), this test involves most of the executive functions required to complete its tasks, including working memory, auditory attention, cognitive flexibility, and inhibitory control.
In the Word classes and Following directions subtests, also linked to working memory and auditory attention, scores are lower in deaf children. This might be due to their reduced auditory attention during the tests, resulting in increased effort and, consequently, greater fatigue when processing information and listening ('listening to hear'). Working memory may be constrained when processing material with high linguistic content, complex structures, unfamiliar words, etc. These findings align with those of Ishida & Chung (2022), who concluded that differences in verbal working memory in deaf children are attributable to differences in language processing, the phonological loop, and underlying linguistic ability.
Statistically significant differences were also found in cognitive flexibility and inhibitory control between deaf children and their hearing peers. Regarding cognitive flexibility, differences could arise from a tendency to persevere on particular items in the presented tasks. Reduced inhibitory control might be related to the observed inclination during assessment sessions to complete tasks rapidly, repeating phrases before the examiner finished producing them. Merchán et al. (2022) analysed inhibitory control differences between deaf and hearing children, specifically, the ability to suppress interference. The results indicated that in deaf children, receptive vocabulary level correlated negatively with interference from distractors. Thus, this supports the thesis of a relationship between language level and executive function in deaf children.
As for reasoning ability, differences between the two groups could be explained by deaf children's more limited auditory experiences, which make it challenging for them to draw inferences; consequently, reasoning based on their experiences is constrained. Additionally, as pointed out by González-Cuenca et al. (2022), it is not that deaf children learn less vocabulary, but that the vocabulary they learn is different (including the semantic categories involved).
Overall, from a neuropsychological interpretation of the results, it could be considered that hearing loss affects both linguistic abilities and executive functions; it seems to have a comprehensive influence on cognitive processes (Kronenberger et al., 2014). In summary, as mentioned above, either auditory deprivation or the associated language deprivation could account for the differences observed when comparing the executive functioning of deaf children with that of their hearing peers. In this regard, Goodwin et al. (2022) found that the age of language exposure predicted overall executive functioning, as well as performance in working memory, planning and organisation. Along the same lines, Guerrero-Arenas et al. (2023) suggest that both neural reorganisation and cognitive development may be compromised if the social conditions for deaf children to learn to speak are not favourable.
When the group of deaf children was divided according to their communication system, the results showed that those using augmentative communication systems such as bimodal and cued speech achieved better outcomes in both language and executive functioning. These results align with findings by Botting et al. (2017) and Kronenberger et al. (2014), who consider the relationship between language and executive function to be interdependent, with language mediating executive performance. However, other authors such as Bavelier et al. (2006), Koo et al. (2008) and, more recently, McFayden et al. (2023), analysing short-term memory abilities, support the idea that lexical elements processed in the visuospatial modality are not retained as well as information processed through the auditory channel. It would therefore be interesting to assess working memory with visuospatial, non-auditory tasks.
In this study, the results obtained in the Core language score and the three linguistic indices assessed through the CELF5 reveal that deaf children using total communication systems obtained the highest scores, followed by those using oral language; the group of deaf children using sign language achieved the lowest scores. These findings do not align with studies in which no differences in language levels were found between children implanted before the age of 5 who used oral language and those who used total communication (McDonald et al., 2000). Even when comparing AVT, oral language, and total communication, some studies find AVT to be the most effective system regarding language development; the authors attributed these results to the earlier age of implantation in that group, which also had a higher socioeconomic status (Thomas & Zwolan, 2019). In the case of the participants with hearing loss in our study, the implant or hearing aid was fitted early in all cases, regardless of the communication system used. While it is true that the sign language group received their prostheses later (around 31 months), the total communication group had an early placement (around 11 months) and, in the oral language group, the average age of placement was 16 months.
The data obtained support the view that the stimulation of oral language in deaf children should begin with communication systems whose grammatical and pragmatic structure is similar to that of oral language, as this would facilitate the transition from one to the other. The chosen system should be implemented early, as soon as hearing loss is suspected, even before the prosthesis is fitted.
Regarding executive functioning, the results also showed statistically significant differences among the three communication modalities. The children using the total communication system obtained higher scores. Therefore, deaf children who used the total communication modality performed better in tasks involving memorization and auditory attention, had greater impulse control, were capable of switching tasks without perseverating, and had a better understanding of the meaning of specific situations when constructing sentences.
On the other hand, the results of the Word classes (related words) subtest showed that children using sign language and those using oral language performed similarly, and much lower than children using total communication systems. This suggests that they have more difficulty with memorisation and repetition, as well as with switching between semantic categories, extracting a common concept from multiple terms, and accessing categorical thinking.
The results obtained in relation to executive functions, considering the communicative modality, could also be related to auditory and linguistic deprivation. As just mentioned, in the case of children using sign language, the age of diagnosis was around 31 months, while for those using oral language (16 months) and total communication (11 months), it was diagnosed earlier. Therefore, children using sign language (except for 2 who were native signers) experienced considerable auditory and linguistic deprivation. Additionally, according to Corina and Singleton (2009), the joint visual attention skills necessary to foster socialization processes, as well as inhibitory and attentional control, appear between 12 and 18 months. Thus, in the case of participants using sign language, these factors could justify some of the deficiencies at the executive level.
Lastly, it is important to note that the main limitations of the study relate to the sample size of the deaf children, especially those using sign language as a communication system. In future studies, it is hoped that group sizes will be balanced across communication modalities, taking into account the duration of exposure, the age of onset, and family involvement.
In the future, it would be useful to determine whether the deficit in executive functioning in deaf children who use sign language is due to linguistic deprivation in addition to auditory deprivation. To do this, a comparison could be made between those who use sign language as their first language and those who acquire it as a second language.
For language assessment, a standardized test was used, which may be less flexible than an assessment complemented with spontaneous speech samples. In Spanish, standardized tests for the assessment of language and cognitive skills in deaf children are scarce. In particular, the Award Neuropsychological Battery (Daza González et al., 2011) addresses this gap: it includes instructions in both oral and signed language to assess receptive vocabulary, selective attention, visuospatial abilities, visual memory, abstract reasoning, sequential processing, and praxis.
In a recent paper on the assessment of language development during the first six years in prelingually deaf children, Lara Barba et al. (2023) highlight the lack of consensus on the optimal assessment method based on the communicative modality used. They conclude that more research is needed on the use of standardized instruments for language assessment in children who use sign language. Currently, there are few tests adapted to the various communication systems. In Spanish, the adaptation of the Communicative Development Inventory to Spanish sign language (Rodríguez-Ortiz et al., 2020) is noteworthy, as it provides normative data for the deaf population.
In summary, studies aimed at improving and adapting assessment tests for the deaf population are essential, considering the heterogeneity related to the communication methods used.
However, it is important to note that in a systematic review by Vázquez Mosquera (2021), the CELF-5 was one of the most reliable tests for the assessment of language skills in children with hearing impairment. It was considered reliable and easy to use and was used in 37.09% of the assessments. Ideally, there would be specific tests for the deaf population that assess different language dimensions.
Conclusions
In short, the scores of children with hearing loss in linguistic and executive function tasks were lower than those of their hearing peers when using a standardized assessment.
The early placement of prostheses, along with the use of a communication system with characteristics and structure similar to oral language, could benefit the acquisition and development of language.
Although research data suggest that total communication systems and oral therapies are the most effective, the choice appears to depend on factors related to the developmental context of the child with hearing loss (socioeconomic status, parents' educational level, communication system used by parents, family support). Therefore, we believe that the interventions should be tailored to the specific needs and supports of each particular case.