In a knowledge-based society like ours, reading is vitally important in the learning process because it involves skills such as communicating, sharing, and using information to solve complex problems (Binkley et al., 2012). Reading literacy is also an indispensable requirement for students' acquisition of other basic skills including mathematics and science (Akbaşlı et al., 2016), as well as for active participation in adult life (Cunningham & Stanovich, 1997; Organisation for Economic Co-operation and Development [OECD], 2013; Smith et al., 2000).
Previous studies have examined the influence of various factors in the process of acquiring reading literacy. On the one hand, they found that sociodemographic characteristics, such as gender, family socio-economic and socio-cultural background, and immigrant status, were the main predictors of reading achievement. In most countries, girls, students from better-off socio-economic backgrounds, and native-born students systematically obtained better results in reading (OECD, 2016, 2019a).
On the other hand, there does seem to be a certain consensus about the positive influence of family participation in early infancy on the development of reading, with starting to read early, the frequency of shared reading with parents, and the level of exposure to early reading tasks marking the difference (Anderson et al., 2010; Bus et al., 1995; Gjems, 2010; Levy, 2018; Mol et al., 2008; Perregaard, 2010). Other studies have highlighted the importance of parents' roles, showing that parents' enthusiasm for reading improved their children's reading (Clavel & Mediavilla, 2019), and that simply seeing parents reading improved children's reading habits (Clark & Hawkins, 2010; Love & Hamston, 2004; Mullan, 2010). Students' confidence in their reading abilities, as well as exposure to early reading tasks, were also associated with a higher probability of being resilient in reading (García-Crespo et al., 2019, 2022).
Understanding the factors that promote the acquisition and development of reading may encourage the creation and implementation of educational policies aimed at boosting children's and young people's reading skills. However, many of the factors that predict reading achievement, such as socio-demographic characteristics, beginning reading early, and parental reading habits, are outside school control, limiting the possibilities of establishing transversal measures. This makes it more important to have studies which provide information about factors within the educational system, such as teaching practices. Although many studies have highlighted the key role of teachers in reducing the impact of those factors outside the education system (Le Donné et al., 2016; Hattie, 2009), there is currently no data about universally effective teaching practices (Echazarra et al., 2016; OECD, 2005).
Studies such as the OECD's Program for International Student Assessment (PISA) have attempted to address this issue by collecting detailed information about teaching practices and the strategies students use to tackle school tasks. PISA's objective is to analyze students' levels of acquisition of basic competencies at the end of compulsory education internationally. To that end, every three years there is an assessment cycle focusing on three knowledge areas: reading, mathematics, and science. Each edition also includes an additional area of innovation; for example, in the 2018 PISA study, the additional area was global competence. The PISA study is organized so that in each cycle, one of the main areas is examined in more detail and with more precision. PISA 2018 included a more extensive assessment of reading literacy, which also included the collection of international indicators about various variables associated with it. The PISA theoretical framework defines reading literacy as “understanding, using, evaluating, reflecting on, and engaging with texts in order to achieve one's goals, to develop one's knowledge and potential, and to participate in society” (OECD, 2019b).
Some authors have analyzed the predictor variables of reading literacy in the academic context using the data available from PISA 2018. Koyuncu and Fırat (2021) found that strategies of summarizing and evaluating credibility were common predictors in the three countries they used in their study: Turkey, Mexico, and China. Karaman (2022) looked at Turkish students, confirming previous findings and adding that students who used a greater number of metacognitive strategies demonstrated better reading performance. That author also identified enjoyment of science as a predictor of reading literacy, along with teaching practices such as directed instruction and adaptive instruction. Rojas-Torres et al. (2021) found a positive association in Costa Rican students between time spent reading, interest in reading, and reading performance.
The results seem to indicate a consistent influence from some variables. However, these studies used data from specific countries and applied statistical techniques based on regression analysis. This means that their results cannot be extrapolated to draw conclusions about the other countries in the assessment.
Various studies have called for research providing transcultural evidence about the factors associated with educational achievement, which would allow universally effective strategies to be identified and used to establish a common foundation of educational policies aimed at encouraging student performance and, in turn, competitiveness in a globalized world. Other studies call for research that would offer rigorous, systematic comparisons between regions and countries with different cultural and educational traditions in order to establish policies tailored to local contexts (Boonk et al., 2018; Fernández-Alonso et al., 2022; Kim, 2020). The present study aims to contribute to this trend for transcultural research, making use of the synergy between the two approaches.
To that end, the study followed a methodological approach aimed at identifying the important variables in the development of reading literacy, both at the international level and at the individual country level, by comparing situations where the variables were present with those where they were not. More specifically, the study used an adaptation of the Difference in Differences (DiD) model to make inferences about the relationship between the level of acquisition of reading literacy and academic context variables such as student knowledge of reading strategies, teaching practices, and reading frequency. DiD is an econometric method that evaluates a program's impact by comparing observations of control and treatment groups at two timepoints: before and after the program being evaluated is implemented (Becchetti et al., 2013). In an educational setting, the DiD strategy consists of comparing the change in people's results before and after participating in a program with the change, over the same period, in the results of people in other schools who did not participate in the program (control group) (Schlotter et al., 2011).
Because the PISA study only has longitudinal data for some of the countries involved in the evaluation, the strategy was adapted based on the proposal from Jürges et al. (2005), who used DiD to identify the causal effect of standardized exams on the performance of German students, using data from the Trends in International Mathematics and Science Study (TIMSS). More specifically, in our study, we compared observations of the same individuals' performance at a single time point in two different subjects, reading and mathematics.
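As a schematic illustration (not part of the PISA analysis itself), the following Python sketch contrasts the classic two-period DiD estimator with the within-student adaptation used here, in which the before/after contrast is replaced by a reading-versus-mathematics contrast for the same student; all numbers are hypothetical.

```python
def classic_did(treated_before, treated_after, control_before, control_after):
    """Classic DiD: change in the treated group minus change in the control group."""
    return (treated_after - treated_before) - (control_after - control_before)


def within_student_outcome(reading_score, math_score):
    """Adapted outcome: the same student's reading score minus mathematics score,
    which plays the role of the 'difference' when only one time point is available."""
    return reading_score - math_score


# Hypothetical example: a student scoring 510 in reading and 495 in mathematics.
print(within_student_outcome(510, 495))  # 15
print(classic_did(500, 520, 505, 510))   # 15
```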
This strategy, as proposed by Jürges et al. (2005), has been used in other studies (Clavel & Mediavilla, 2019; Cordero & Pedraja, 2018), and has also been modified in studies using similar models based on within-student comparisons in order to estimate the impact of teaching practices or characteristics on student performance (Bietenbeck, 2014; Schwerdt & Wuppermann, 2011) and the influence of teaching time on academic performance (Rivkin & Schiman, 2015). These studies, among others, have demonstrated the usefulness of the modified DiD strategy for establishing causal relationships in cross-sectional studies where the dependent variables are related to student performance.
The current study aims to contribute to the debate about effective teaching capable of promoting reading literacy in students at the international level. More specifically, the study's objective is to determine what factors, related to potential teaching tools in the classroom, are associated with higher levels of reading literacy in all of the OECD countries evaluated in the PISA 2018 study.
Method
Participants
The study used the data for all of the participating OECD countries in the 2018 edition of the PISA study. Table 1 shows the numbers of students in each country along with the total population they represent.
Country | Sample N | % girls | Population represented |
---|---|---|---|
Australia | 14273 | 50% | 257779 |
Austria | 6802 | 49% | 75077 |
Belgium | 8475 | 50% | 118025 |
Canada | 22653 | 50% | 335197 |
Switzerland | 5822 | 48% | 71683 |
Chile | 7621 | 50% | 213832 |
Colombia | 7522 | 51% | 529976 |
Czech Republic | 7019 | 50% | 87808 |
Germany | 5451 | 46% | 734915 |
Denmark | 7657 | 50% | 59967 |
Spain | 35943 | 50% | 416703 |
Estonia | 5316 | 50% | 11415 |
Finland | 5649 | 49% | 56172 |
France | 6308 | 49% | 756477 |
United Kingdom | 13818 | 51% | 597240 |
Greece | 6403 | 50% | 95370 |
Hungary | 5132 | 51% | 86754 |
Ireland | 5577 | 50% | 59639 |
Iceland | 3296 | 50% | 3878 |
Israel | 6623 | 54% | 110645 |
Italy | 11785 | 48% | 521223 |
Japan | 6109 | 51% | 1078921 |
Korea | 6650 | 48% | 455544 |
Lithuania | 6885 | 49% | 24453 |
Luxembourg | 5230 | 50% | 5478 |
Latvia | 5303 | 51% | 15932 |
Mexico | 7299 | 52% | 1480904 |
Netherlands | 4765 | 49% | 190281 |
Norway | 5813 | 50% | 55566 |
New Zealand | 6173 | 51% | 53000 |
Poland | 5625 | 51% | 318724 |
Portugal | 5932 | 50% | 98628 |
Slovak Republic | 5965 | 50% | 44418 |
Slovenia | 6401 | 47% | 17138 |
Sweden | 5504 | 50% | 93129 |
Turkey | 6890 | 49% | 884971 |
United States | 4838 | 49% | 3559045 |
Total | 294527 | 50% | 13575905 |
Instruments
In PISA 2018, the participating students completed a cognitive test and a context questionnaire. The cognitive performance scale assessed competencies which included students' ability to extrapolate from what they had learned and apply their knowledge and skills to real-life situations, as well as their capacity to analyze, reason, and effectively communicate their findings when they addressed, interpreted, and solved problems in various situations. The full scale includes questions related to reading, mathematics, science, and global competence, and would take 13 hours to complete. From that scale, various combinations of questions were used to make up tests which would last approximately 2 hours. Because this edition of PISA looked at reading in more detail, there were more questions for assessing this dimension than the others. The full scale has 245 items in total for evaluating reading literacy, which translates to about six hours of assessment (for more details on the design, see the PISA 2018 Assessment and Analytical Framework; OECD, 2019b). The tests were taken in an electronic format, with the reading test in this case being adaptive; the difficulty of the items in the test was tailored to the students' abilities based on their prior responses, using a multi-stage adaptive design (Yamamoto et al., 2019).
The context questionnaire collected students' demographic data and information about non-cognitive variables. Some of those, such as gender or repeating a school year, were used as simple indices. Others contributed to the construction of more complex indices which sought to assess latent constructs through observable variables. One example is enjoyment of reading, a construct measured through five observable variables.
In addition to these two instruments, PISA 2018 included a school questionnaire to be completed by school authorities, a questionnaire for teachers, and one for students' families. The latter two were optional. Because only a few countries administered these two optional questionnaires, the data they collected were not included in the study.
The explanatory variables in the study were classed as control variables and treatment variables. Control variables were determined by the students' own characteristics or their individual contexts; i.e., those which were not malleable at the school level. These variables were included in the model to control for possible biases due to student characteristics. The control variables in the current study were students' gender and their economic, social, and cultural status (ESCS). ESCS is constructed from three components: parents' occupational status, parents' educational attainment (in both cases taking the higher of the two parents' values), and home possessions (including the number of books in the home). Since PISA began in the year 2000, there has been evidence of a strong relationship between ESCS and student performance (Raitano & Vona, 2016), which has become the focus of numerous studies.
The treatment variables were those related to the promotion of reading literacy that teachers might influence. They were spread over three blocks of constructs related to styles of teaching and learning reading, labelled: reading strategies and enjoyment of reading, teaching practices in language lessons, and reading frequency.
The PISA 2018 questionnaire covers two evaluation scenarios of students' reading strategies, both related to metacognition: a) summarizing, and b) understanding and remembering. Students were asked to rate the reading strategies according to how useful they were for tackling a reading task, an assessment that was also carried out in parallel by a group of experts through multiple pairwise comparisons. This expert assessment produced a hierarchy of the strategies for each task, ranked from most to least useful, with the agreement of at least 80% of the experts. Based on this hierarchy, rules were created to construct a score for each student according to how often they rated a more useful strategy above a less useful one. The final score assigned to each student for each task ranged from 0 to 1 and can be interpreted as the proportion of the student's pairwise comparisons that agreed with the experts' hierarchical order; higher scores indicate more choices in agreement with the experts' assessments. From those scores, the PISA study constructed two indices using Item Response Theory (IRT): UNDREM (which included the understanding and remembering strategies) and METASUM (which covered the summarizing strategies).
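As an illustration of the scoring rule described above, the following Python sketch computes a student's score for one task as the proportion of expert-validated strategy pairs on which the student's usefulness ratings agree with the experts' ranking. The strategy labels, the ratings, and the use of every pair from a single expert ranking (rather than only pairs endorsed by at least 80% of experts) are simplifications for illustration, not the official PISA scoring syntax.

```python
from itertools import combinations


def metacognition_score(expert_ranking, student_ratings):
    """Proportion of (more useful, less useful) strategy pairs, taken from the
    experts' ranking, for which the student rated the more useful strategy
    strictly higher. Returns a value between 0 and 1."""
    pairs = list(combinations(expert_ranking, 2))  # ranking is ordered most -> least useful
    agreements = sum(
        1 for more_useful, less_useful in pairs
        if student_ratings[more_useful] > student_ratings[less_useful]
    )
    return agreements / len(pairs)


# Hypothetical summarizing task: three strategies ranked by the experts.
expert_ranking = [
    "check_summary_covers_key_points",
    "underline_important_sentences",
    "copy_as_many_sentences_as_possible",
]
student_ratings = {
    "check_summary_covers_key_points": 6,
    "underline_important_sentences": 5,
    "copy_as_many_sentences_as_possible": 2,
}
print(metacognition_score(expert_ranking, student_ratings))  # 1.0
```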
The reading strategies block also included enjoyment of reading, another PISA index constructed via IRT. This index assesses whether the students use reading as a pastime or to find information (rather than as an obligation), and whether they like to talk about books with other people. Higher values in this index indicate more enjoyment of reading. The PISA 2018 technical report gives a detailed description of the indices' construction (OECD, 2023).
It is worth emphasizing that PISA 2018 included an additional metacognition construct that assessed students' abilities to evaluate the credibility of information, an essential skill for the 21st century digital world (OECD, 2021). Students were asked to describe their reaction to receiving an email with potentially harmful content. Given that our study focuses on identifying effective teaching practices for encouraging transversal reading literacy, we excluded this construct from the treatment variables, which only included the traditional reading strategies. In addition, as Suárez-Álvarez et al. (2022) noted, adding specific content about digital skills without making other changes to study plans could be problematic.
To evaluate teaching practices, the students were also asked how often their teachers demonstrated support in language classes, how often they adapted their classes to students' needs, and how often they used reading stimulation strategies. Students' perceptions of their teachers' levels of enthusiasm and interest, and of their directing role in the classroom, were also assessed. Table 2 shows the control variables and the observable variables used to measure the strategies, reading frequencies, and perceived teaching style, along with a short description of each. All of the treatment variables were standardized to N(0, 1), which made it easier to compare the results of the study.
Dimension | Variable | Description |
---|---|---|
Control variables | Gender | Whether the respondent is a boy or a girl. |
Control variables | Students' social, economic, and cultural index (ESCS) | Reflects the educational and occupational level of the parents, possessions in the home, and the number of books in the home. |
Reading enjoyment and reading strategies | Enjoyment of reading | Whether the student only reads because they have to, thinks reading is a waste of time, likes to talk about books with others, or only reads to find information they need. |
Reading enjoyment and reading strategies | Metacognition: summarizing | Whether the student is aware of effective reading strategies for summarizing texts: checking whether the most important parts of the text are included in the summary, underlining important sentences, and rewriting them later in their own words as a summary. |
Reading enjoyment and reading strategies | Metacognition: understanding and remembering | Whether the student is aware of effective reading strategies for understanding and remembering a text: discussing the content with others after reading, underlining the most important parts of the text, and summarizing the text in their own words. |
Teaching practices in language lessons | Teacher support in language lessons | The students' opinions about whether their teachers show interest in all students' learning, offer additional help if needed, or continue explaining until all students understand the topic. |
Teaching practices in language lessons | Teacher's stimulation of reading | The students' opinions about whether the teachers often encourage them to express opinions about a text, help them to relate what they read to their own lives, or ask questions to prompt active participation from students. |
Teaching practices in language lessons | Teacher-directed instruction in language lessons | The students' opinions about whether the teachers set clear learning objectives, ask questions to check student understanding, or say what they have to learn. |
Teaching practices in language lessons | Perceived teacher's interest | The students' opinions about the teachers' levels of involvement, motivation, and enjoyment of their work: whether teachers like teaching and addressing the topic of reading, and whether the teachers' enthusiasm inspires the students. |
Teaching practices in language lessons | Adaptation of instruction in language lessons | The students' opinions about whether the teachers adapt lessons to the needs and knowledge of the class, give individual support, or change the structure of a lesson on a topic that most of their students struggle with. |
Reading frequency | Frequency of online reading | How often the students read on digital devices (email, online news, etc.). |
Reading frequency | Frequency of reading for school | How often the student has had to read different types of texts in class or as homework in the previous month. |
Reading frequency | Frequency of reading newspapers and news | How often the students read newspapers and magazines without having to do so. |
Reading frequency | Frequency of reading for pleasure | How often the students read comics or fiction books when they do not have to. |
Procedure
PISA 2018 was administered following the OECD standards (OECD, 2023). Each student completed a test of cognitive items in a session lasting 120 minutes, with a five-minute break halfway through. Following that, they completed the context questionnaire.
Data Analysis
The present study used the international PISA 2018 database, which is freely available on the OECD webpage; all of the observations of students in OECD countries were selected (OECD, 2019a).
The methodological approach was based on an adaptation of the DiD methodology, which allows cross-sectional data, such as that collected in the PISA study, to be analyzed.
The differences between the control group and the treatment group results were assessed at the same timepoint. Because the main objective of our study was to evaluate the effectiveness of teaching styles and reading strategies, the variable of interest in the treatment group was the students' results on the PISA reading scale. The control group data were the results of the same students in another subject, in this case mathematics. The first step was to calculate the dependent variable, defined as the difference between each student's score in reading and the same student's score in mathematics. Mathematics scores were chosen as the control (rather than scores in science) because the correlations between reading and mathematics in PISA are systematically weaker than those between reading and science (Anderson et al., 2010), meaning that the effect of the treatment variables is easier to detect.
In the PISA study, students' cognitive responses are analyzed using IRT in combination with a complex imputation methodology for student scores, which produces an a posteriori distribution of values for each subject with their associated probabilities, giving rise to what are called plausible values (Martínez Arias, 2006). In PISA 2018, ten plausible values are produced for each performance scale (reading, mathematics, and science). Estimating any population parameter in PISA requires an estimation using each of the plausible values separately; the final statistic is then equal to the mean of the ten estimations obtained, one for each plausible value (OECD, 2009). In line with the PISA analytical methodology, the explained variable in our study was calculated as the difference between each of the ten plausible values in reading and the corresponding plausible values in mathematics. The standard errors were calculated considering the sample weightings at the national level, as well as the imputation variance. The PISA 2018 technical report includes the details of the calculation process (OECD, 2020).
Following the strategy from Clavel and Mediavilla (2019), the mathematical expression of the explained variable is:
$$dif_{i,j}^{r-m} = pv\_read_{i,j} - pv\_math_{i,j}, \qquad j = 1, 2, \ldots, 10,$$

where $pv\_read_{i,j}$ is the $j$-th plausible value for the reading score of subject $i$;
$pv\_math_{i,j}$ is the $j$-th plausible value for the mathematics score of subject $i$;
and $dif_{i,j}^{r-m}$ is the difference between the $j$-th plausible value in reading for subject $i$ and the $j$-th plausible value in mathematics for the same subject $i$.
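A minimal Python sketch of this first step is shown below; the plausible-value column names (PV1READ ... PV10READ, PV1MATH ... PV10MATH) follow those of the public PISA 2018 student file, while the DIF1 ... DIF10 names for the resulting differences are our own.

```python
import pandas as pd


def add_pv_differences(df: pd.DataFrame, n_pv: int = 10) -> pd.DataFrame:
    """Compute the dependent variable: for each plausible value j, the
    reading-minus-mathematics difference DIFj = PVjREAD - PVjMATH."""
    for j in range(1, n_pv + 1):
        df[f"DIF{j}"] = df[f"PV{j}READ"] - df[f"PV{j}MATH"]
    return df
```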
The second step was to prepare the explanatory variables listed in Table 2. For the gender variable, a value of 0 corresponded to boys and a value of 1 to girls. For the second control variable, ESCS, and for the indices of reading strategies and teaching practices, the scales provided by PISA were used, derived from IRT scaling (OECD, 2020). The indices for reading frequency were calculated as the sum of the scores on the reading-frequency items, standardized over the OECD sample.
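Continuing the sketch, the second step might look as follows; the gender variable name and coding (ST004D01T: 1 = female, 2 = male) should be checked against the PISA codebook, `freq_items` stands for whichever reading-frequency items are used, and the standardization here is unweighted for simplicity.

```python
def prepare_explanatory_variables(df, freq_items):
    """Create a 0/1 gender dummy (0 = boy, 1 = girl) and a reading-frequency
    index defined as the sum of the frequency items, standardized over the sample."""
    # Assumed PISA coding for ST004D01T: 1 = female, 2 = male (check the codebook).
    df["GIRL"] = (df["ST004D01T"] == 1).astype(int)
    total = df[freq_items].sum(axis=1)
    df["READ_FREQ"] = (total - total.mean()) / total.std()
    return df
```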
The third step was to estimate the model according to the DiD methodology. The statistical expression of the model was:

$$dif_{i,j}^{r-m} = \beta_0 + X_i \beta + T_i \delta + \varepsilon_{i,j},$$

where $X_i$ is the matrix containing the model's control variables (gender and ESCS) and $\beta$ the associated vector of coefficients;
$T_i$ is the matrix of treatment variables (reading strategies, teaching practices for reading, and reading frequency);
$\delta$ collects the effects of the treatment variables on the difference in performance between reading and mathematics;
and $\varepsilon_{i,j}$ is the error term.
The model was estimated using the traditional Ordinary Least Squares (OLS) procedure, calculating the parameters associated with the explanatory variables, which reflect the mean effect of each variable on the dependent variable.
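The sketch below estimates the model once per plausible-value difference with weighted least squares (using the final student weight W_FSTUWT) and combines the ten runs as described earlier: coefficients are averaged, and the imputation (between-plausible-value) variance is added to the mean sampling variance. The official PISA methodology obtains the sampling variance from the 80 replicate weights, which is omitted here for brevity, so these standard errors are only approximate.

```python
import numpy as np
import statsmodels.api as sm


def did_estimates(df, predictors, n_pv=10, weight_col="W_FSTUWT"):
    """Weighted OLS on each DIFj, then combine across the ten plausible values:
    final coefficient = mean of the ten estimates; final variance = mean sampling
    variance + (1 + 1/n_pv) * between-estimate variance."""
    X = sm.add_constant(df[predictors])
    weights = df[weight_col]
    coefs, samp_vars = [], []
    for j in range(1, n_pv + 1):
        fit = sm.WLS(df[f"DIF{j}"], X, weights=weights).fit()
        coefs.append(fit.params.to_numpy())
        samp_vars.append(fit.bse.to_numpy() ** 2)
    coefs, samp_vars = np.array(coefs), np.array(samp_vars)
    b = coefs.mean(axis=0)
    se = np.sqrt(samp_vars.mean(axis=0) + (1 + 1 / n_pv) * coefs.var(axis=0, ddof=1))
    return dict(zip(["const"] + list(predictors), zip(b, se)))


# Example call (columns assumed to have been prepared in the previous sketches):
# estimates = did_estimates(df, ["GIRL", "ESCS", "METASUM", "UNDREM", "READ_FREQ"])
```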
The model was estimated separately for the sample from each of the 37 OECD countries. The OECD mean was also calculated, along with its total standard error, by weighting the country-level estimates as follows:

$$\hat{\delta}_{OECD} = \frac{\sum_{i} w_i \, \hat{\delta}_i}{\sum_{i} w_i}, \qquad \hat{\sigma}^2\!\left(\hat{\delta}_{OECD}\right) = \frac{\sum_{i} w_i^2 \, SE\!\left(\hat{\delta}_i\right)^2}{\left(\sum_{i} w_i\right)^2},$$

where $w_i$ is the total of the final student weights for country $i$, and $\hat{\delta}_i$ and $SE(\hat{\delta}_i)$ are the estimate and standard error obtained for that country.
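Read as the population-weighted aggregation sketched in the formula above (our reconstruction, not necessarily the exact intsvy computation), this step could be reproduced as follows, with each country's estimate weighted by the population it represents and countries treated as independent.

```python
import numpy as np


def oecd_weighted_mean(country_estimates, country_ses, country_weights):
    """Weighted OECD mean of country-level estimates and the standard error of
    that weighted mean, with weights equal to the sum of final student weights
    (i.e., the population represented) in each country."""
    b = np.asarray(country_estimates, dtype=float)
    se = np.asarray(country_ses, dtype=float)
    w = np.asarray(country_weights, dtype=float)
    mean = np.sum(w * b) / np.sum(w)
    variance = np.sum(w**2 * se**2) / np.sum(w) ** 2
    return mean, np.sqrt(variance)
```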
Because of the complexity of the data structure and the PISA methodology, the statistics and final standard errors were calculated using the R package intsvy, in a configuration designed for the PISA evaluation (Caro & Biecek, 2017).
Results
Table 3 shows the estimates of the DiD model at the OECD level, along with the statistical significance of the effect of each explanatory variable on the dependent variable (the difference between reading and mathematics).
Dimension | Explanatory variables | B | Sig. |
---|---|---|---|
Reading strategy | Enjoyment of reading | 8.810 | *** |
Reading strategy | Metacognition: summarizing | 4.921 | *** |
Reading frequency | Frequency of online reading | 3.046 | *** |
Reading strategy | Metacognition: understanding and remembering | 1.763 | *** |
Teaching practices | Teacher support in language lessons | 1.594 | *** |
Reading frequency | Frequency of reading for school | 1.353 | *** |
Teaching practices | Teacher-directed instruction | 0.264 | *** |
Reading frequency | Frequency of reading newspapers and news | 0.098 | *** |
Teaching practices | Perceived teacher interest | -0.115 | *** |
Teaching practices | Teacher's stimulation of reading | -0.119 | *** |
Teaching practices | Adaptation of instruction | -0.185 | *** |
Control variables | Gender (girl) | 26.783 | *** |
Control variables | Social, economic, and cultural index (ESCS) | -3.007 | *** |
Note. Significance: *** p < .001 (the probability values were below .001 in all cases).
The results in Table 3 indicate the strong impact on reading literacy of both mastery of strategies for summarizing texts and enjoyment of reading. Students who read not only because they have to, who do not feel that reading is a waste of time, and who indicate reading as one of their preferred pastimes and a topic of conversation with others had scores in reading which, on average, were nine points higher than in mathematics on the PISA scale. The aspect with the next-strongest impact on reading literacy was knowledge of effective summarizing strategies: students with high scores on this index scored five points higher in reading than in mathematics. Another strategy associated with higher scores in reading, albeit to a lesser extent, was understanding and remembering. Reading frequency was also positively associated with reading performance, particularly online reading (reading news on the internet, searching for information online, and participating in online discussions and forums) and frequent reading in class or as homework (fiction, diagrams and maps, or digital texts with links).
The prevalent teaching practices in reading classes, or in classes aimed at stimulating students' reading, did not seem to have a meaningful relationship with performance. Only the perception of teacher support in language lessons had a positive impact on reading results: students who indicated that their teachers showed an interest in each student's learning, provided extra help when needed, and continued explaining things until the students understood exhibited higher scores in reading than in mathematics.
Among the control variables, gender was associated with a significant increase in reading performance compared to mathematics: girls scored, on average, almost 27 points higher in reading than in mathematics. Students with higher ESCS levels tended to score, on average, three points higher in mathematics than in reading. Table 4 shows the estimates of the DiD model in each of the OECD countries.
Note. The grey bars are estimates of positive impact; the white bars are estimates of negative impact. Significance: *** p < .01, ** p < .05, * p < .10; NS = not significant; ND = not available.
As Table 4 shows, enjoying reading, knowing effective strategies for summarizing and for understanding and remembering, and frequent reading of digital texts had a positive impact on the dependent variable in practically all of the OECD countries. In general, the effects of the different teaching practices were relatively heterogeneous and varied considerably between countries. Teachers' stimulation of reading (encouraging students to express their opinions about a text, helping them to relate what they read to their lives, or asking questions that encourage active participation) was a highly effective strategy in Greece, Denmark, and Lithuania, whereas in Japan it was associated with lower scores in reading than in mathematics. Moreover, teacher support in language lessons was confirmed as a positive strategy in 27 countries, while adapting teaching to student needs was a positive strategy in 11. An active teacher-led style also had an impact on performance, although it was positive in 17 of the OECD countries and negative in 15.
Discussion
The objective of our study was to evaluate the impact of various factors, such as teaching practices and knowledge of effective reading strategies, on the reading literacy assessed in the OECD countries in the 2018 PISA study.
The results show that the factor most strongly associated with reading literacy across the OECD countries was enjoyment of reading. Students who often read voluntarily and out of interest scored, on average, nine points more in reading than in mathematics, even after controlling for the effect of student gender and socio-economic background. These results are in line with the findings of Cheema (2018) and Clark and Rumbold (2006), who showed that reading for pleasure also improved reading comprehension and grammar, encouraged positive attitudes towards reading and pleasure in reading as adults, and improved general knowledge.
Another aspect with one of the strongest relationships with reading literacy was students' knowledge of effective strategies for summarizing texts. The strong impact of reading strategies on reading competence had already been demonstrated in PISA 2009, the previous edition of the study in which reading was examined in greater detail (OECD, 2010a, 2010b). Students who were effective readers prioritized the following summarizing strategies: reading the whole text, underlining the most important sentences, writing them out later in their own words, or checking carefully whether the most important parts of the text are covered by the summary, while ruling out strategies such as copying as many sentences as possible. Metacognition with regard to deploying effective strategies for understanding and remembering was also positively related to reading performance, albeit to a lesser extent. These strategies prioritize discussing the content of texts with other people rather than, for example, reading the text twice very quickly or aloud, or focusing only on the parts of the text that are easy to understand. There is a solid base of scientific evidence showing that direct teaching of effective reading strategies contributes to increased student reading abilities (Pressley, 2000; Rosenshine & Meister, 1996; Waters & Schneider, 2009), which is why teachers should work on reading strategies throughout students' schooling.
In most educational systems, and despite its importance in developing other competencies, reading is not taught to 15-year-old students as an independent subject in the same way as mathematics or science (OECD, 2019b), which limits the role teachers can play in promoting adolescents' reading literacy. In most cases, reading habits are already shaped by the family context or previous schooling (García-Crespo et al., 2019, 2022; Levy, 2018). Nonetheless, guidelines for effective reading may be given explicitly or incidentally in language lessons or in other subjects (OECD, 2019a).
The frequency of online reading (such as emails, online news, or internet searches) was another factor related to better performance in reading than in mathematics in PISA 2018. This result confirms previous findings that practices related to searching for information online explained a significant, albeit small, part of the variance in digital reading skills (Naumann, 2015; OECD, 2010a). The frequency of reading in class or as part of homework was also related to better reading results, although to a lesser extent.
In terms of the teaching practices we evaluated, students whose teachers showed an interest in each student's learning, provided extra help when students needed it, and continued with explanations until students understood the topic scored, on average, 1.6 points more in reading than in mathematics. Previous research has also shown that the support activities and strategies teachers provide to help students construct knowledge and acquire autonomy and self-concept improve students' performance in reading, increase awareness of reading strategies, and encourage student participation in reading activities (Guthrie et al., 2012; Guthrie et al., 2013). Other strategies we analyzed in this study, such as teacher-led instruction, teachers' enthusiasm, teachers stimulating reading, and adapted teaching, did not demonstrate a meaningful impact on promoting reading at the OECD level. However, these practices were effective in some countries, including Greece, Denmark, and Lithuania, where students whose teachers encouraged them to express opinions about texts, helped them to relate the stories they read to their own lives, or asked questions that encouraged active participation performed significantly better in reading than in mathematics.
The OECD identifies various aspects related to common teaching policies in high-performing countries. These include a varied, tailored offering of opportunities for continuing professional development and teacher assessment mechanisms with a strong focus on the design of individualized educational trajectories (OECD, 2018). Teaching effective reading strategies, as well as techniques and practices that encourage enjoyment of reading, should be included in teachers' continuing training because of their importance and high impact.
In addition, programs promoting an early start with reading in the first few years of schooling, along with activities that promote family involvement in shared reading, are the foundation for creating sound reading habits and enhancing enjoyment of reading (Levy, 2018), making it possible to educate a generation of active, competent readers.
The limitations of the present study are mainly related to the characteristics of data collection in the PISA study. The fact that the teaching practices we evaluated did not exhibit consistent effects between countries may be due to the type of evaluation. The PISA data are collected via self-report questionnaires, in which students respond with reference to their school and to the teachers of a specific subject, without any subsequent link between those responses and, for example, particular teachers' characteristics or styles, and without considering past learning experiences, which would no doubt also have affected students' reading performance. These limitations highlight the need to improve the design of studies aimed at identifying factors associated with performance.
Despite these limitations, and largely thanks to its standardized data collection process, the international nature of the PISA study gives researchers a singular opportunity to identify universal mechanisms capable of improving educational quality, creating a sound base of scientific evidence that is indispensable for producing educational policies based on the principles of efficacy, efficiency, and equity.