1 Introduction

According to the UNESCO IESALC report (United Nations Educational, Scientific and Cultural Organization [UNESCO], 2020), there has been a remarkable surge in enrolment in higher education institutions (HEIs) worldwide over the last two decades, rising from 19% to 38% between 2000 and 2018. This rapid growth has brought a greater variety of students to universities, but institutions still lack a clear understanding of these students’ specific characteristics. This creates challenges in areas such as teaching quality, dropout, and educational management. In this context, universities are concerned with quickly identifying problems that may affect students’ academic success, chief among them how students are performing in their studies (Barahona et al., 2016). To address this, universities are seeking new tools that help students achieve the expected learning outcomes and, in this sense, early alert systems (EAS) are among the tools with the greatest potential to meet this objective (Cele, 2021).

EASs are instruments that help to promptly identify students with academic problems and to generate opportunities to support them. Thus, while studies on the causes and characteristics of students who drop out are retrospective analyses of an event, support methodologies such as EASs are mainly innovations in academic management that seek active, timely and effective ways to help solve different problems in the academic field through intelligent information management (Castillo & Alarcón, 2018; Donoso-Díaz et al., 2018). Regarding the technological resources that support these initiatives, since 2010 a market for EAS technological development has grown to cover needs demanded by HEIs. Examples include software for university academic management such as the SATD program, which predicts student dropout based on historical dropout indicators, or the Centinela software, which proposes a multi-level concept of student monitoring (Donoso-Díaz et al., 2018; Casanova et al., 2021). However, despite the availability of software aimed at minimizing the dropout risks derived from poor academic performance, new tools are needed that specifically focus on identifying students’ weaknesses in academic aspects, with the purpose of strengthening their learning, ensuring the acquisition of concepts that are fundamental for higher courses and thereby contributing to the student’s graduation profile.

Based on this need, and in the context of higher education and EASs, the gap addressed by this research lies in the ability to analyse student performance at a specific level. Unlike conventional systems, which typically issue alerts based on final exam results, our approach focuses on breaking down these results according to the specific cognitive skills (CS) assessed in quizzes. Historically, EASs have offered a panoramic view of academic performance, identifying areas of risk but without delving into the underlying cognitive competencies. The proposed system allows a detailed assessment of how students are addressing particular CS. For example, instead of simply noting that a student is at risk in a general course, our system can flag in real time which specific CS are at risk and require support. This capacity for analysis at the level of CS not only provides a more complete picture of student performance, but also guides teaching and learning strategies more precisely, as proposed in other studies (Morales, 2018; Yağcı, 2022). Big data analysis has become essential to address challenges in student learning (Cele, 2021). However, it is important to bear in mind that the success of an academic implementation depends largely on promoting a collaborative environment among the group of academics (Jong et al., 2022; Krichesky & Murillo, 2018). In this context, this study includes a teacher perception survey to analyse various aspects of the implementation.

1.1 EASs implemented

EASs have been widely used in the educational field around the world for at least two decades. These systems are designed to proactively identify and address students’ academic challenges before they become larger problems that put academic success at risk. They consist of two main components. The first incorporates alerts or “red flags” that are communicated to teachers or other professionals in the area, showing the problem that students have in one or more academic areas so that appropriate measures can be taken in advance. The second component is intervention, which can include any “strategic method within reach” that addresses the problem(s) identified through the alert system, either in a general way or individually for the students who require it. Frequent measures include additional support, academic advice, or referral of the student to specialized resources. All these actions aim to intervene early in the student’s problems and mitigate any negative impact during the educational process. Several higher education institutions have promoted the use of EASs, some of which can be seen in Table 1.

Table 1 Features of some EAS implemented in HEIs.

1.2 Use of technological tools in EAS

To ensure the success of an EAS in the academic field, it is essential to have modern technological tools aligned with institutional objectives. In the global panorama of educational software, various algorithms dedicated to the early identification of students at academic risk stand out, such as the Student Success System by Desire2Learn (Essa & Ayad, 2012) and the Study Dashboard platform by Khan Academy (Mi, 2019). Likewise, the Starfish Enterprise Success Platform system and the alert application of the Educational Data Research Institute of the University of Electronics, Science and Technology in China offer similar approaches (Mi, 2019). Among the systems designed for this purpose, the Dropout Early Warning System (DEWS) uses historical data to forecast the probability of student dropout, using quantitative and qualitative approaches. Furthermore, the Centinela program, implemented at the Universidad Católica de la Santísima Concepción (Chile), adopts a multilevel approach to raise academic quality (Casanova et al., 2021).

Currently, many HEIs are generating and implementing their own algorithms to obtain predictive models with indicators that estimate academic performance based on the characteristics of their students (Mi, 2019; Duong et al., 2023). More specifically, EASs have been used to monitor students’ progress and predict their academic performance in individual courses, thus providing both instructors and students with the opportunity to make early interventions (Colby, 2005; Macfadyen & Dawson, 2010; Baepler & Murdoch, 2010). However, despite the growing implementation of EASs in HEIs, there is a lack of effective tools that can carry out massive data analysis for real-time monitoring of the learning of specific cognitive skills in each subject. This represents an important gap in terms of targeting the effective achievement of the expected learning outcomes.

In this research, we use the Power BI tool, a business intelligence software adapted for the collection, analysis and visualization of data on students’ academic performance in the expected CS. This approach not only facilitates real-time reporting, sharing key information with academic members and coordinators, but also aids management and decision-making in relation to teaching and learning strategies for the CS expected of our students. The integration of Power BI into our research provides a comprehensive and effective perspective to optimize the implementation of academic strategies, thus closing the gap identified in the current literature.

1.3 Research’s purpose and contributions

The objective of this research was to develop a new EAS based on the analysis of the approval percentage of the questions in on-site evaluations of a highly complex subject taken by first-year students at a South American university. Specifically, the aim was to identify whether students achieved the expected CS, accompanied by timely measures to strengthen teaching strategies for critical content. In addition, to evaluate the impact of the EAS, the academic performance of the students on the campus where the EAS was implemented, overall and for each major, was analysed and compared with another campus where it was not implemented. Finally, to evaluate the professors’ perception of the EAS, a perception survey was applied to the academic team that participated in the process. In summary, the main contribution of this research is the design and implementation of a new EAS that helps measure student learning in a highly complex on-site course in HEIs, as well as identifying groups of students with low performance on quizzes, to allow early intervention and achieve the expected learning outcomes.

2 Methodology

2.1 Sample description

The sample consisted of 1,691 students from different majors in the health area who enrolled in General Biochemistry at two campuses of a South American university during the second academic period of 2022 (2022-20). The campus where the EAS was implemented (experimental group) comprised 994 students, while the campus where it was not implemented (control group) comprised 697 students.

For the analysis of the professors’ perception regarding the implementation of the EAS, the sample consisted of 11 academics who taught the subject at the campus where the EAS was put into practice.

2.2 Quizzes design

All Assessment Tools (AT) used in the EAS were developed based on assessment blueprints, which were conceived and refined over a three-year period. These blueprints outlined the learning outcomes, procedural resources, and CS that the student was expected to demonstrate in each question of each evaluation, taking Marzano’s taxonomy as a reference. The question banks were prepared with the active participation of more than 30 teachers who had taught the subject over the years. These banks were created in 2020 and 2022, and the questions they contained were subjected to an exhaustive analysis by the entire teaching team under the supervision of a subject coordinator. This process guaranteed the validity and reliability of the questions used in the present study.

In this way, both the teachers at the experimental campus and those at the control campus had access to a common, validated bank of questions, which was used as input for all evaluations applied (quizzes and exams) in each section in their charge. This collaborative and structured approach helped ensure consistency and equity in the evaluations. Each AT consisted of 9 to 10 single-choice questions and was administered on three occasions throughout the 2022-20 academic period. The responses obtained in the AT were reviewed using an optical reader (G3 SPA, Chile), followed by a detailed analysis of the results with the Power BI program described in the following section.

2.3 Design and architecture of the EAS

To analyse the approval percentages per question at a general level, per teacher and per section, we first created a database in Excel containing all the answers provided by the students who took the quizzes. Subsequently, the data were analysed using Power BI, focusing on the approval level of each of the CS associated with the questions evaluated in the quizzes. The results of the analysis were presented through an online report accessible to the professors who taught the subject. This report showed a detailed view of the CS evaluated in each quiz question (AT), the approval percentage of each question per teacher and per section, and the general approval percentage of each question, as shown in Fig. 1.

The results were updated automatically as the teachers entered the information into the database and, if the approval percentage of a CS was less than 30%, it was highlighted in red in the report to alert the professor and the team, so that teaching-learning strategies could be generated for the weak skills evidenced.
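The flagging logic of the report can be illustrated with a minimal sketch. The actual system used Excel and Power BI; the Python code, sample data and column names below are illustrative assumptions, not the production implementation:

```python
import pandas as pd

# Hypothetical long-format answer data: one row per student response.
# Column names are illustrative, not those of the actual database.
answers = pd.DataFrame({
    "teacher": ["T1", "T1", "T1", "T2", "T2", "T2"],
    "question": [1, 1, 2, 1, 2, 2],
    "correct": [1, 0, 0, 1, 0, 0],  # 1 = correct answer, 0 = incorrect
})

CRITICAL_THRESHOLD = 30.0  # approval below 30% is flagged (shown in red)

# General approval percentage per question
overall = answers.groupby("question")["correct"].mean().mul(100)

# Approval percentage per teacher and question, with the critical flag
per_teacher = (answers.groupby(["teacher", "question"])["correct"]
               .mean().mul(100).rename("approval_pct").reset_index())
per_teacher["critical"] = per_teacher["approval_pct"] < CRITICAL_THRESHOLD
```

In the real report this aggregation was refreshed as teachers entered new quiz data, and the `critical` condition drove the red highlighting per CS.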

Fig. 1
figure 1

Report available online for the teaching team. Shows the CS of each question (left panel), percentages of correct answers for each question associated with each teacher (upper right panel), percentages of correct answers for each question associated with each section-teacher or Course Registration Number (CRN) (middle-right panel) and a data segmenter with the name of each teacher (bottom right panel)

After the data analysis for each quiz (the following week), the teaching team gathered to reflect on and discuss the overall results and the results by teacher presented in the online report, with the aim of sharing experiences and building collaborative teaching-learning strategies to improve the acquisition of the expected CS. The strategies implemented in the classroom depended on the critical CS evidenced, the students’ profile and the professor’s experience; they included problem-based learning, flipped classroom, work guides oriented to case resolution and reinforcement capsules.

To measure the impact of the academic intervention, the final academic performance of the students, overall and by major, was analysed based on all the evaluations they had to take during the semester according to the course planning. In addition, to measure the teachers’ perception of the implementation of the EAS, a perception survey and a questionnaire of 4 open questions were applied at the end of the EAS. Figure 2 shows a flow chart summarizing the steps followed in the EAS.

Fig. 2
figure 2

Flowchart that indicates the steps implemented in the EAS

2.4 Perception survey for teachers

The survey used in this study was based on the instrument developed by Sanabria (2011) and Vizcaíno et al. (2018), which was originally validated through expert judgment (content validity) via an inter-researcher evaluation. Subsequently, the instrument was piloted and its reliability analysed, obtaining a Cronbach’s Alpha coefficient of 0.912.

To adapt the survey to the context of this study, items related to commitment, quality, communication and interaction, and time investment were selected from the original survey. These adaptations were necessary to address the particularities of the population and the specific objectives of this research. The items were grouped so that the adapted perception survey consisted of 20 items distributed across 3 dimensions linked to the steps of EAS implementation: (1) teacher commitment in the use of the EAS platform and in meetings, (2) teacher opinion on the implementation of the EAS platform, and (3) influence of the EAS on the work of the teaching team.

For each item, the survey presented response options using a 5-category Likert scale, where the options were: (1) in total disagreement/never, (2) disagree/almost never, (3) neither agree nor disagree/sometimes, (4) agree/almost always and (5) totally agree/always.

2.5 Statistical analysis

For the descriptive analysis of the perception survey for teachers, the response percentages were presented, based on the absolute and relative frequencies of the categories associated with the responses of each of the dimensions evaluated in the 20 survey items.
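This descriptive tabulation can be sketched for a single Likert item as follows (the responses shown are hypothetical, not the actual survey data):

```python
import pandas as pd

# Hypothetical responses (Likert categories 1-5) from the 11 surveyed teachers
responses = pd.Series([5, 5, 4, 4, 4, 3, 5, 4, 5, 4, 3])

absolute = responses.value_counts().sort_index()       # absolute frequencies
relative = (absolute / len(responses) * 100).round(1)  # relative frequencies (%)
```

With 11 respondents, each single response corresponds to roughly 9.1%, which is why the percentages reported in the Results section appear in multiples of that value.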

To analyse the distribution of the academic performance variable, an Anderson-Darling normality test was applied with a significance level of 0.05 (5%). Subsequently, for the comparative analysis between the experimental group and the control group, Student’s t-test was used when the variable was normally distributed, or the Mann-Whitney test when it was not, with a significance level of 5% (α = 0.05). Minitab Statistical Software version 21.2.0.0 (© 2022 Minitab, LLC) was used for the analyses.
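The decision rule above can be sketched as a minimal Python/SciPy analogue of the Minitab workflow (the grade samples are simulated for illustration, not the study’s data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical final-grade samples (1.0-7.0 scale) for the two campuses
experimental = rng.normal(4.5, 0.8, 200).clip(1.0, 7.0)
control = rng.normal(4.2, 0.8, 200).clip(1.0, 7.0)

def compare_groups(a, b, alpha=0.05):
    """Anderson-Darling normality check on each group, then
    Student's t-test if both pass, otherwise Mann-Whitney U."""
    def is_normal(x):
        res = stats.anderson(x, dist="norm")
        # index 2 of critical_values corresponds to the 5% significance level
        return res.statistic < res.critical_values[2]
    if is_normal(a) and is_normal(b):
        name, result = "t-test", stats.ttest_ind(a, b)
    else:
        name, result = "mann-whitney", stats.mannwhitneyu(a, b)
    return name, result.pvalue, result.pvalue < alpha
```

The function returns which test was selected, its p-value, and whether the difference is significant at α = 0.05.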

3 Results

The results of the study are presented below in two sections. The first presents the analysis of the approval percentages of the CS detected by the EAS, highlighting those with critical performance. The second presents the results of the perception survey applied to teachers and the final academic performance of the students, overall and by major, on the campus where the EAS was implemented, compared with the control group.

3.1 Performance results of the CSs evaluated in the highly complex subject

The results of the approval percentages of the CS corresponding to the first quiz are displayed in Fig. 3.

Fig. 3
figure 3

Approval percentages for each question measured in the first quiz. The approval percentages per question for each teacher and the total approval percentage per question are indicated. The highlighted cells indicate the questions that presented an approval level of less than 30% (critical level)

The overall (total) performance analysis of the first quiz shows that only 1 question (question 5) had a general approval percentage greater than 70%, while 6 questions (1, 2, 3, 6, 9 and 10) had between 50% and 60% approval and 3 questions (4, 7 and 8) performed below 50%. For the question-level analysis, the data of the 10 academics who entered the required information were used (professor 2 was excluded). The individualized analysis of the approval percentage per question indicated that 20% of the academics (2 of 10) had critical performance in questions 1, 4, 8 and 9, and that 10% (1 of 10) performed critically in questions 3 and 6 (Fig. 3, highlighted cells).

Additionally, it can be observed that, out of the academics who promptly entered the information, 40% did not present questions with critical performance (professors 1, 6, 8 and 9), while 30% of the academics had only one question with critical performance (professors 3, 7 and 10), 20% presented 2 questions with critical performance (professors 4 and 11) and 10% presented 3 questions with a critical level of approval (professor 5).

The results, corresponding to the second quiz, are shown in Fig. 4.

Fig. 4
figure 4

Approval percentages for each question measured in the second quiz. The approval percentages per question for each teacher and the total percentage of approval per question are indicated. The highlighted cells indicate the questions that presented an approval level of less than 30% (critical level)

The general (total) performance analysis of the second quiz shows that question 1 had a general approval percentage greater than 70%, while 7 questions (2, 3, 4, 6, 7, 8 and 10) had between 50% and 70% approval and 2 questions (5 and 9) performed below 50%. As shown in the highlighted cells in Fig. 4, the specific analysis of the percentage of teachers who had questions with less than 30% approval (critical level) shows that 45.5% of the academics (5 of 11) had low performance in question 5, 27.3% (3 of 11) in question 9 and 9% (1 of 11) in each of questions 2, 3 and 10.

Additionally, it was observed that 45.5% of the academics did not present questions with low performance (professors 1, 3, 6, 8 and 10). On the other hand, 36.4% of the academics had only one question with a low level of approval (professors 4, 5, 7 and 9), 9% presented 2 questions with a low level of approval (professor 11) and 9% presented 5 questions with a critical level of approval (professor 2).

The results, corresponding to the third quiz, are shown in Fig. 5.

Fig. 5
figure 5

Approval percentages for each question measured in the third quiz. The approval percentages per question for each teacher and the total percentage of approval per question are indicated. The highlighted cells indicate the questions that presented an approval level of less than 30% (critical level)

The general (total) performance analysis of the third quiz shows that 7 questions (1 to 6 and 8) had a general approval percentage greater than 70%, while 2 questions (7 and 9) had between 50% and 70% approval and no question performed below 50%. For the question-level analysis, the data of the 10 academics who entered the required information were used (professor 1 was excluded). The individualized analysis of the approval percentage per question indicated that only 10% of the academics (1 of 10) had critical performance in questions 7 and 9 (Fig. 5, highlighted cells), while all the others had approval above 36%.

Additionally, it was observed that 80% of the academics did not present questions with low performance (professors 2, 3, 6, 7, 8, 9, 10 and 11), while the remaining 20% had one question with a low level of approval (professors 4 and 5).

3.2 Results of the teacher perception survey about the EAS

Analysis of dimension 1

For the analysis of teacher participation in the implementation of the EAS, 7 items from the perception survey were evaluated, which analysed the professors’ commitment in the different steps they had to carry out.

As shown in Table 2, more than 50% of the teachers stated that they had reviewed the tutorials of the platforms associated with the EAS, marking the options “always” or “almost always”, while 36.4% (4 of 11) stated that they “sometimes” reviewed them and only 9.1% (1 of 11) stated that they “almost never” did. Regarding the timing of the evaluations, 72.7% (8 of 11) and 18.2% (2 of 11) stated that they “Always” and “Almost always”, respectively, applied the evaluations on the date and time declared in the prior planning, while only 9.1% (1 of 11) declared that they “Sometimes” did so.

Table 2 Results of the teacher perception survey for teacher’s commitment in the EAS

Regarding the evaluation review system, 90.9% (10 of 11) declared that they had carried out the revision promptly, while only 9.1% declared that they had done it “sometimes”. For the item on information entry in the database, 63.6% (7 of 11) declared that they “Always” entered the information in a timely manner and 36.4% declared that they “Almost always” did so.

For the item related to reviewing the report (Power BI dashboard), 54.5% (6 of 11) and 18.2% (2 of 11) stated that they “Always” and “Almost always” performed this step of the EAS implementation, while 18.2% declared “Sometimes” and 9.1% “Almost never”. Regarding attendance at meetings with the teaching team to discuss the results delivered by the Power BI panel, 18.2% declared that they “Always” attended, 72.7% (8 of 11) declared “Almost always” and 9.1% declared that they “Never” attended meetings related to the EAS. These data correlate with the attendance percentages recorded by the research team, where 27.3% (3 of 11) attended 100% of the meetings associated with the EAS, 54.6% (6 of 11) missed a single session, 9.1% (1 of 11) missed 2 sessions and 9.1% (1 of 11) missed 3 sessions.

Finally, in relation to active participation and the generation of ideas to strengthen the critical CS evidenced by the EAS, 36.4% declared that they “Always” carried out this activity during the meetings, 18.2% declared “Almost always”, 27.3% declared “Sometimes” and 18.2% declared that they “Almost never” did.

Analysis of dimension 2

For the analysis of the teachers’ opinion on the development of the EAS, 8 items from the perception survey were evaluated. As can be seen in Table 3, 45.5% of the teachers (5 of 11) declared that they “Totally agree” and 54.5% (6 of 11) that they “Agree” that the implemented EAS is a tool that helps identify critical CS in a timely manner and improves access to information for decision-making by the academic team.

Table 3 Results of the teacher perception survey for teacher’s opinion on the implementation of the EAS

Regarding the number of interactions between the members of the academic team, 45.5% (5 of 11) stated that they “Strongly agree” that the number of interactions increased with the implementation of the EAS, 36.4% (4 of 11) stated that they “Agree” and 18.2% (2 of 11) were “Neither agree nor disagree”. For the item stating that the EAS “does NOT contribute anything new” as an academic management tool, 54.5% (6 of 11) stated that they “Disagree” and 45.5% that they “Strongly disagree”. Similarly, 45.5% of teachers stated that they “Strongly disagree” and 45.5% that they “Disagree” that the use of the EAS represented a “major waste of time for the teacher”, while only 9.1% were “Neither agree nor disagree” with the above.

In relation to whether the EAS’ implementation is considered a “significant contribution that helps to improve the quality of teaching” and if it “encourages collaborative work among the members of the teaching team”, 27.3% of the teachers declared that they “Strongly agree”, likewise, 63.6% “Agree” with these statements, while only 9.1% were “Neither agree nor disagree” with these items. Regarding the item that refers to whether the implementation of the EAS is an “innovative resource” of the management process in academia, 36.4% stated that they “Strongly agree” with this statement and the remaining 63.6% pointed out that they “Agree”.

Analysis of dimension 3

For the analysis of the teachers’ opinion on the EAS’ influence on the collaborative work of the academic team, 5 items from the perception survey were evaluated. As can be seen in Table 4, 18.2% of the teachers (2 of 11) stated that they “Strongly agree” and 54.5% (6 of 11) “Agree” that the use of the EAS helped to develop teamwork skills among professors, while 27.3% stated that they “Neither agree nor disagree” with this statement. Regarding whether the academic meetings facilitated the identification of the critical CS, 45.5% of the teachers stated that they “Strongly agree” and 36.4% “Agree”, while 18.2% were “Neither agree nor disagree”.

Table 4 Results of the teacher perception survey for influence of the EAS in teacher’s collaborative work

Regarding “dialogue and discussion” and its influence on decision-making, 27.3% of the teachers stated that they “Strongly Agree” and 27.3% “Agree” that these instances of interaction with the group of teachers helped each team member learn about decision-making, while 45.5% “Neither Agree nor Disagree” with this statement. In this same line, in item 4, which evaluated whether the meetings helped to apply teaching and learning strategies to strengthen the critical CS detected by the EAS, 9.1% “Strongly Agree”, 72.7% “Agree” and 18.2% “Neither Agree nor Disagree” with this statement. Finally, to find out if the EAS implemented helped professors to identify their strengths and weaknesses as a member of the academic team, 45.5% of the teachers “Strongly Agree”, 27.3% “Agree” with this statement, while 27.3% “Neither Agree nor Disagree”.

3.3 Final academic performance and by student’s major

To evaluate the impact of the EAS’ implementation on content learning, the students’ academic performance in both groups was analysed, by major and at a general level. As shown in Fig. 6a, the median analysis indicates that, at a general (total) level, there was a significant increase in the scores of the campus where the EAS was implemented compared to the control campus (p < 0.001). The analysis of the percentage distribution based on final grade intervals, shown in Fig. 6b, indicates that on the campus where the EAS was implemented there is a lower percentage of students with grades in the lower intervals: a 2% decrease in the 1.0-1.9 interval, 3% less in the 2.0-2.9 interval, and an 8% decrease in the 3.0-3.9 interval. Conversely, there was an increase in the percentage of grades in the upper intervals, with an increase of 11% in the 4.0-4.9 interval and of 1% in the 5.0-5.9 interval. It is important to mention that both the admission profiles of the students and the academic team that taught the subject were equivalent between the campuses where the EAS was and was not implemented. Additionally, as a second experimental control, the academic performance of the students in other subjects taken during the semester was compared, finding no significant differences in the median comparison analysis. These results suggest that the implementation of the EAS is most likely responsible for the increase in the scores of the highly complex subject.
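The interval-based distribution of Fig. 6b can be reproduced conceptually with a short sketch (the grades below are invented for illustration; only the 1.0-7.0 scale and the interval edges follow the figure):

```python
import pandas as pd

# Hypothetical final grades on a 1.0-7.0 scale
grades = pd.Series([1.5, 2.8, 3.5, 4.2, 4.8, 5.1, 6.3, 4.0, 3.9, 5.5])

# Interval edges matching Fig. 6b; right=False so e.g. 4.0 falls in "4.0-4.9".
# The last edge is 7.1 so that a grade of 7.0 is captured in "6.0-7.0".
bins = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.1]
labels = ["1.0-1.9", "2.0-2.9", "3.0-3.9", "4.0-4.9", "5.0-5.9", "6.0-7.0"]
dist = (pd.cut(grades, bins=bins, labels=labels, right=False)
        .value_counts(normalize=True).mul(100).round(1).sort_index())
```

Computing this distribution separately for the experimental and control campuses and subtracting the two series yields the per-interval percentage differences reported above.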

Additionally, the analysis by major is shown in Fig. 6a, indicating that although 87.5% of the disciplines (7 out of 8) had an increase in the median grade at the location where the EAS was implemented, only 3 majors in the health area did so significantly (disc. 6, 7 and 8), which correspond to 37.5% of the majors in the health area that contain this highly difficult subject in their curricular plan.

Finally, regarding the approval rates at a general level, as shown in Fig. 6c, the campus where the EAS was implemented had 12% more students approving the course than the control campus. The analysis by major shows an increase in the approval percentage in every major: 2 disciplines showed minor percentage differences, discipline 2 (4%) and discipline 1 (7%), while disciplines 3, 4, 5, 6, 7 and 8 showed greater differences, with discipline 3 showing the greatest.

Fig. 6
figure 6

Final academic performance and by student’s major. (a) Comparison of the final grades’ average with and without the implementation of the EAS, at a general level by campus (total) and by majors (disc. 1 to 8). (b) General distribution of academic performance with and without the EAS, expressed as the percentage of students who were in the different grade intervals. (c) Approval percentage at a general level and by major (8 disciplines) with and without the EAS’ implementation. Value *p < 0.005

4 Discussion

This study proposes a new EAS based on digital tools for massive data analysis to identify weak CS in quizzes in a challenging subject in a HEI and generate teaching and learning strategies to strengthen the weak contents detected.

This research focused on three aspects: the development of the EAS to identify students’ weak cognitive skills, the teachers’ perception of its implementation, and the impact on learning, measured by analysing the students’ academic performance. For the implementation of this EAS, effective coordination was established between the disciplinary coordinator of the subject and the teaching team. This made it possible to efficiently analyse the performance in the CS of each section in which the EAS was implemented.

Unlike other EAS implemented in HEIs, which focus on predictive analyses of academic performance and student persistence (Hall et al., 2021; Ferguson, 2012; Pistilli & Arnold, 2010; Sclater et al., 2016), or that evaluate factors mainly related to the student, such as low performance, low class attendance or the identification of student problems (Donoso-Diaz et al., 2018), the system introduced in this research has a different focus. This system focuses primarily on providing real-time information to teachers, allowing them to evaluate their teaching practices in relation to student performance on the assessed CS during quizzes. In this context, previous research has indicated that the grades obtained in these evaluations are a relevant predictor to anticipate students’ final grades (Yağcı, 2022).

Based on the results obtained with the implemented EAS, a substantial benefit was identified in terms of student achievement. The analysis of the CS demonstrated by students in their quizzes served as the starting point for providing continuous support to the teacher. This approach sought not only to correct deficiencies but also to generate proactive teaching and learning opportunities, thus improving the quality of the education provided.
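As an illustration only (the paper does not publish its implementation), the core aggregation step of such a system, flagging cognitive skills whose mean quiz score falls below a cut-off, could be sketched as follows. All names, the data layout, and the 0.6 threshold are assumptions, not the authors' actual design.

```python
from collections import defaultdict

def flag_weak_skills(quiz_results, threshold=0.6):
    """Return the cognitive skills (CS) whose mean score across all
    students falls below `threshold` (fraction of items correct).

    quiz_results: iterable of (student_id, skill, score) tuples,
    with score in [0, 1] per quiz item. Layout is hypothetical.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for _student, skill, score in quiz_results:
        totals[skill] += score
        counts[skill] += 1
    means = {skill: totals[skill] / counts[skill] for skill in totals}
    # Sorted so teachers see a stable, readable list of weak CS.
    return sorted(skill for skill, mean in means.items() if mean < threshold)

# Hypothetical quiz data: one skill clearly weak, one not.
results = [
    ("s1", "concept integration", 0.3),
    ("s2", "concept integration", 0.4),
    ("s1", "prior-course content", 0.9),
    ("s2", "prior-course content", 0.8),
]
print(flag_weak_skills(results))  # → ['concept integration']
```

In practice such a report would be regenerated after each quiz, so the teaching team can discuss the flagged CS in their coordination meetings while the material is still current.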

It is relevant to highlight that, when analysing exam grades, a significant impact was observed among students whose grades fell in the 4.0 to 4.9 interval. This group showed an 11% higher approval rate than the campus that did not implement the EAS. In contrast, the impact on the highest grade interval (6.0 to 7.0) was more moderate, at a 1% increase. These findings suggest that the EAS especially benefits lower-performing students, specifically those in the 3.0 to 3.9 grade interval, thus helping to narrow the gap between students of different academic levels (Fig. 6b).
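The comparisons above reduce to two simple statistics per cohort: the share of students in each grade interval and the approval rate. A minimal sketch of that computation, assuming the Chilean 1.0 to 7.0 grading scale implied by the intervals in the text and a passing cut-off of 4.0 (the cut-off and the sample grades are illustrative, not the study's data), might look like this:

```python
def interval_distribution(grades,
                          bins=((1.0, 3.9), (4.0, 4.9), (5.0, 5.9), (6.0, 7.0))):
    """Percentage of students whose final grade falls in each interval.
    The 1.0-7.0 scale and bin edges are assumptions for illustration."""
    n = len(grades)
    return {f"{lo}-{hi}": round(100 * sum(lo <= g <= hi for g in grades) / n, 1)
            for lo, hi in bins}

def approval_rate(grades, cutoff=4.0):
    """Share of students at or above the passing cut-off (assumed 4.0)."""
    return round(100 * sum(g >= cutoff for g in grades) / len(grades), 1)

# Hypothetical cohorts with and without the EAS.
with_eas = [4.2, 5.1, 3.8, 6.3, 4.7, 5.5]
without_eas = [3.5, 4.1, 3.9, 5.0, 4.4, 3.2]
print(approval_rate(with_eas))     # → 83.3
print(approval_rate(without_eas))  # → 50.0
print(interval_distribution(with_eas))
```

Comparing these distributions between campuses, as in Fig. 6b and 6c, is what reveals whether gains concentrate in the lower intervals rather than among already high-performing students.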

It is crucial to note that the samples analysed shared similar academic characteristics, such as college admission scores and high school grades (data not shown). However, differences in other variables were not explored, such as sociodemographic, socioeconomic, and personal factors, which have been identified as possible influences on academic performance (Cele, 2021). Extending the EAS to the institution's administrative team emerges as a valuable direction for future research. This would allow a more complete analysis of student performance and attendance at evaluations, as well as the identification and resolution of possible administrative problems, following successful practices implemented at other HEIs (Carvajal et al., 2016; Donoso-Díaz et al., 2018; Castillo & Alarcón, 2018; Macfadyen & Dawson, 2010).

It is essential to highlight that, although collaboration between teachers is closely linked to the improvement of teaching practices (Jong, 2022; Krichesky & Murillo, 2018), putting such collaboration into practice represents a challenge to consider when implementing this EAS. Accordingly, the implementation took into account several previously studied factors that influence teacher collaboration (Vangrieken et al., 2015), including teachers' attitudinal skills, their professional abilities, their capacity for teamwork, and the clarity of objectives. These aspects could be assessed in advance, since more than 70% of the teachers who participated in this study (8 of 11) had taught the subject and had worked together in academic periods prior to the study.

Regarding the teaching team's participation in the implementation of the EAS, the results indicated that 80.9% of teachers attended at least 2 of the 3 meetings, while only 9.1% did not participate in any session. Among the causes of absence mentioned by teachers (in a separate perception survey composed of critical-thinking questions; data not shown), the main ones were lack of time, scheduling constraints, and work overload, problems common to academia worldwide (Castilla et al., 2021). In relation to active participation in the generation of ideas and proposals, only 36.4% of teachers stated that they "Always" participated actively, while the remaining responses were distributed among the other options. We do not have clear information explaining the passive participation of some teachers, although studies indicate that it may stem from various factors (Vangrieken et al., 2015); a more in-depth analysis of this aspect is needed in future research.

Regarding teachers' perception of the EAS and its contribution to academic management, the results reflect a generally positive view. Most teachers affirmed that the EAS helped them identify critical aspects of their students' learning in a timely manner, improved access to information, and generated more opportunities for interaction among teachers. These elements are fundamental for sound pedagogical practice, optimal professional development, and the strengthening of educational quality (Castillo & Alarcón, 2018; Donoso-Díaz et al., 2018). Furthermore, there is high agreement among teachers on certain items of the second dimension analysed, referring to the characteristics of the EAS. Teachers agreed that this academic management tool is innovative, encourages collaborative work, and does not represent a significant loss of time, constituting a meaningful contribution to improving the quality of teaching. These survey results are encouraging: if academics consider the implemented EAS a tool that contributes significantly to the quality of their teaching work, they are likely to want to use it again.

The weak CS detected in students mainly concerned a lack of understanding of content from previous subjects and difficulty integrating, understanding, and relating concepts. To address these specific weaknesses, teachers used concept maps and blackboard diagrams within a constructivist approach, the flipped-classroom modality to reinforce concepts from previous subjects and, less frequently, problem-based learning to apply the new concepts taught in the classroom. In this context, the use of concept maps as a teaching-learning tool was considered the experience with the greatest positive impact, facilitating understanding of the relationships between concepts, their memorization, and learning of the subject, as has also been reported in other studies of General Biochemistry (Pérez-Parallé, 2023). In addition, the flipped-classroom and problem-based learning strategies, despite being implemented less frequently, were well received by the students of the subject, in line with similar findings in other studies using these teaching strategies in General Biochemistry (De Souza Nascimento, 2022; Morales, 2018). The present research thus provides the opportunity to classify teaching and learning strategies according to the CS we seek to develop in our students, which can be evaluated by our system.

On the other hand, it is crucial to recognize that the results of this study must be interpreted with caution, given the inherent limitations of the non-probability sampling technique (which prevents generalization), the moderate sample size, and the type of assessment used. Regarding the latter, single-choice quizzes do not necessarily measure higher-order skills such as critical thinking, creativity, or decision-making. Nevertheless, we consider that this research represents a significant step in academic-management innovation and opens new opportunities for future research. A valuable line of work would be to extend the implementation of this tool to a greater number of courses and teachers, allowing the consistency of the results to be evaluated. Ultimately, we maintain that this EAS constitutes a significant contribution, and we envision its potential applicability in other HEIs to strengthen academic management and improve the quality of teaching.

5 Conclusion

In summary, the evaluation of the implementation of the EAS for CS reveals its functionality and applicability in various academic contexts. The versatility of the EAS, supported by easy-to-use computational tools that analyse performance on different CS in real time, makes it a valuable aid for collaborative work and decision-making by the academic team regarding their teaching practices, thus contributing to the continuous improvement of teaching and learning strategies.

Academic results support this approach, showing an increase in the grades of students who used the EAS and suggesting a positive impact on the learning of CS in a highly complex subject compared to students without access to the EAS. This study highlights the relevance of the system as a key tool for academic management, along with its potential contribution to improving student performance and, therefore, educational quality. The pedagogical implications are also clear: the EAS allows teachers to address students' cognitive needs specifically, providing continuous support and adapted pedagogical strategies and closing the gap between different academic levels. The combination of a functional tool and positive results supports the effectiveness and importance of the EAS in the context of higher education.