Since the 1970s, international research into the effectiveness and improvement of schools has produced a fairly comprehensive body of knowledge about the factors that distinguish successful schools. Over the last two decades, increasing attention has been paid to schools defined as ‘failing’, ‘low-achieving’, ‘underperforming’, or ‘declining’, a categorisation that follows from low student performance on standardised tests and consecutive years of failure to meet targeted levels of achievement (Murphy & Meyers, 2008). In sum, a wide range of factors may lead schools into a spiral of decline, including difficult conditions at home, student behaviour problems, high student turnover, low student achievement levels, low teacher qualification levels, high faculty turnover, and a lack of school management and leadership (Altrichter & Moosbrugger, 2011; Altrichter et al., 2008; Clarke, 2004; Datnow & Stringfield, 2000; Hochbein, 2012; Muijs et al., 2004; Murphy, 2008; Potter et al., 2002).

The first article in this issue addresses schools in decline, with a specific focus on how to identify them using data and indicators for early intervention. The other articles explore issues and approaches related to educational quality at various levels, such as school self-evaluation, teachers’ data-driven decision making, students’ engagement with feedback, and the advantages of association questions in multiple-choice tests.

1 Articles in this issue of EAEA 2/2021

In the first article, Meyers, Wronowski, & van Groningen argue that a strong focus on school improvement seems to have limited our ability to understand the factors that might contribute to school decline. Consequently, school improvement can come to be understood as work that only needs to be undertaken once a school is labelled as “failing”, rather than continuous work that schools should be doing proactively. In their study, the authors analysed more than 10 years of longitudinal student achievement data in English language arts and in mathematics in the state of Texas, USA. They found that a considerable number of schools consistently declined in student achievement over time. The authors report significant predictors of decline, such as shifting student demographics and changes in the percentage of economically disadvantaged students. Interestingly, while a higher starting percentage of students labelled as English language learners increased the likelihood of decline, an increasing percentage of English language learners over time reduced the rate of decline. The authors also found that leadership stability appears to be important in preventing school decline. Their overall aim was to develop early warning indicators so that school improvement responses can be tailored at the earliest possible stage.

In the second article, Aderet-German reports on a study of school self-evaluation (SSE) established in a semi-private school network in Israel. The author focused on staff perceptions of goals and organisational mechanisms related to sustainable school improvement and accountability practices. Based on a case study approach using interviews and observations, the author found that several factors contributed to an evaluation model that balanced development and accountability measures. The most important factor was that evaluation procedures were continuously adapted to the needs of the schools, which emphasised local learning and development and increased the capacity to build the professionalism of the school actors involved.

In the third article, Schelling & Rubenstein address how data-driven decision making (DDDM) has become a key element in school governance. The state of Indiana has adopted the Indiana Statewide Testing for Educational Progress (ISTEP) state assessment system, as well as the RISE evaluation model with its Indiana Teacher Effectiveness Rubric, both focusing on assessment and data use. Through focus group interviews, the authors explore how teachers in this state perceive DDDM and how they use formative assessment data to guide instructional adaptations. Their findings show, for instance, that teachers adapt their lessons mostly through “in-the-moment” decisions based on students’ reactions to their instruction. Moreover, when teachers analyse performance data, they prefer data from standardised assessments over the formative assessments they have created themselves, because the former are mandatory. Administrators’ attitudes towards DDDM shaped teachers’ perceptions of it, and teachers drew on the experiences of their peers to learn and further develop their DDDM practices.

In the fourth article, Van der Kleij & Lipnevich report on their review of the research literature on students’ perceptions of feedback as a basis for further learning. After analysing 164 studies, the authors identified several challenges within the field, such as the lack of a common theoretical foundation, isolated bodies of cumulative research in different sub-fields, repetitiveness of studies, research driven by practical concerns such as student dissatisfaction or teacher frustration, and methodological problems. In particular, the authors point to the consequences of the lack of theoretical frameworks, which leads to a wide range of examined variables and makes it difficult to compare studies directly and thereby generate useful insights. The authors identify important gaps, such as the need to explore how student perceptions of feedback relate to engagement with feedback and subsequent meaningful outcomes, and note that many studies fall short in addressing practical implications.

In the fifth and final article, Lai, Jong, Hsia, & Lin address association questions (AQs) as a novel form of multiple-choice question (MCQ). The advantage of AQs is that learners must recall the concepts denoted by the given terms, affirm their connections, and then select the term whose denotation is least linked to the other concepts. Based on an experimental approach involving two groups of participants enrolled in programmes at a university in Taiwan, the authors found that optional online AQ tests for reviewing course content supported knowledge retention and were especially suitable for learners with medium levels of initial knowledge.

2 Some reflections

One important theme across the articles in this issue concerns data, indicators, and feedback as a basis for monitoring quality, learning, and development. In the first article, Meyers et al. argue that an early warning system, together with increased awareness of indicators already available in many school governing systems, can be used to identify schools facing challenges that could mark the start of a process of decline unless action is taken to support the school.

In addition to increased awareness and earlier interventions based on data, this issue also points to the importance of key actors developing ownership of the data in school evaluations aimed at improving schools, as raised by Aderet-German. Teachers also use immediate responses from students as a main source of feedback for improving instruction (see Schelling & Rubenstein). Finally, this issue includes an important contribution regarding the major challenges of research into students’ engagement with feedback. The lack of theorisation in this field is also evident in other areas concerning data use (e.g. Firestone & Donaldson, 2019; Prøitz et al., 2017), and the article by Van der Kleij & Lipnevich represents an important contribution to guiding future empirical research.