
Effectiveness of Video-based Training for Face-to-face Communication Skills of Software Engineers: Evidence from a Three-year Study

Published: 11 December 2023


Abstract

Objectives. Communication skills are crucial for effective software development teams, but those skills are difficult to teach. The goal of our project is to evaluate the effectiveness of teaching face-to-face communication skills using AVW-Space, a platform for video-based learning that provides personalized nudges to support students' engagement during video watching.

Participants. The participants in our study are second-year software engineering students. The study was conducted over three years, with students enrolled in a semester-long project course.

Study Method. We performed a quasi-experimental study over three years using AVW-Space, a video-based learning platform, and present the instance of AVW-Space we developed to teach face-to-face communication. Participants watched and commented on 10 videos and later commented on the recording of their own team meeting. In 2020, the participants (n = 50) did not receive nudges, and we use the data collected that year as the control. In 2021 (n = 49) and 2022 (n = 48), nudges were provided adaptively to encourage students to write more and higher-quality comments.

Findings. The findings from the study show the effectiveness of nudges. We found significant differences in engagement when nudges were provided. Furthermore, there is a causal effect of nudges on the interaction time, the total number of comments written, and the number of high-quality comments, as well as on learning. Finally, participants exposed to nudges reported higher perceived learning.

Conclusions. Our research shows the effect of nudges on student engagement and learning while using the instance of AVW-Space for teaching face-to-face communication skills. Future work will explore other soft skills, as well as providing explanations for the decisions made by AVW-Space.


1 INTRODUCTION

1.1 Problem

Software engineers need not only technical skills (e.g., about programming and technologies) but also soft skills that enable effective teamwork and communication with different stakeholders in a project. For example, communication skills are crucial in software engineering (SE) to elicit and share information with various stakeholders such as other developers and end users [53]. Face-to-face meetings are especially important as they allow team members to share information and make decisions [19]. Accreditation bodies and universities acknowledge the importance of soft skills [26, 51]. However, teaching these skills is time-consuming, requires hands-on exercises, and is therefore not easily scalable [3, 25]. SE education predominantly fosters soft skills training in group projects [58], since exercising soft skills requires real project work with diverse teams and constant feedback and guidance from instructors.

1.2 Review of Relevant Scholarship

1.2.1 Teaching Soft Skills in Software Engineering Education and Beyond.

Soft skills can be practiced in classrooms using debates, role playing, and demonstrations, as well as in case studies, field visits, and mock interviews [64]. One common approach for teaching soft skills in software engineering education is to develop dedicated courses [2, 49], although students do not always appreciate such courses [57]. Another frequently used approach in software engineering education is project-based learning (PBL) [12], where students work as a team to build a software system [34] in an educational setting. PBL allows students to practice soft skills [51, 40]. Recent work has explored how factors such as team diversity impact student performance in team projects [29].

Another approach to teaching soft skills is having students record themselves while using particular soft skills. For example, outside software engineering, Rehear et al. [55] used role-play videos to provide feedback to first-year dentistry students on their communication skills and how to improve them. Although such approaches can be effective, they require significant resources in terms of teaching time. Software engineering and computing courses often contain assessment items that require soft skills, but there is no explicit teaching of such skills due to already full curricula.

There have also been approaches to teaching soft skills using game-based learning, in both physical and virtual spaces. Game-based learning (GBL) uses various gaming technologies to create entertaining environments that promote learning [62]. GBL approaches for teaching soft skills range from physical environments such as board games, e.g., [59], and educational escape rooms [4] to digitized environments. For example, Clark and colleagues [13] describe educational escape rooms in which students work in teams to solve a problem (disarming a bomb) by passing multiple tests required to release an engineer hostage, with the main goal being improving communication, leadership, and teamwork skills. Digitized environments include simulations that allow students to practice certain tasks while at the same time practicing their soft skills [28]. The popularity of computer games gave rise to serious games, i.e., games primarily designed for learning rather than entertainment [22]. McGowan and colleagues [42] describe Compete!, a single-player, two-dimensional serious game that requires final-year university students to use several soft skills (creative problem solving, communication, stress management, and teamwork) while working on sustainability problems. The authors also point out the risks involved with serious games, as they are often oversimplifications of real-life situations, thus making soft skills difficult to translate to the real world. Furthermore, a lack of engagement can lead to shallow learning.

A way to overcome challenges related to teaching soft skills is adopting Video-Based Learning (VBL). Previous studies show that VBL is appropriate for teaching soft skills [3, 15, 17, 20]. Watching videos enables learners to recall their past experiences and also to see other people using soft skills in various situations, thus facilitating reflection. We discuss the use of VBL for teaching soft skills further in Section 1.2.3.

1.2.2 Engagement.

It is widely acknowledged that passive learning is inferior to active learning. In educational research, active learning is defined as a set of activities that the student performs on top of passive listening [5]. Active learning is student-centered, as opposed to teacher-centered classroom education; the role of the teacher becomes that of a facilitator rather than the provider of information. Various types of activities can be used to engage students actively: asking questions, solving problems, debating with peers, think-pair-share exercises, case studies, peer teaching, peer review, and others. In active learning, students perform activities and at the same time reflect on their knowledge, learn from each other, and improve their critical thinking, group work, and other soft skills.

Our research is based on the Interactive-Constructive-Active-Passive (ICAP) theory of cognitive engagement [9, 10], which describes engagement based on the overt behaviors students enact while learning. ICAP identifies four categories of overt behaviors with decreasing levels of engagement: Interactive (I), Constructive (C), Active (A), and Passive (P). The theory states that the more engaged students are, the more they learn (I > C > A > P). Each learning mode has its own characteristic behaviors. Examples of the Passive mode include listening to a lecture, watching a video, or reading a text without performing any other activity. Such behaviors generally result in storing the newly received information, the outcome being that the student can recall that information. In the Active mode, students might take verbatim notes while attending a lecture or watching a video; such behaviors help integrate the new information with existing knowledge. The Constructive mode goes further, as the student infers relationships with existing knowledge. An example of the Constructive mode for a student who is attending a lecture and taking notes is when the student draws concept maps, self-explains the material, or draws analogies, thus enabling transfer. In the Constructive mode, the student generates new information or makes associations between the lecture/video content and their own experiences. The characteristic feature of the Interactive mode is dialoguing: The student may ask for clarifications, compare the new information, or discuss the material with peers or teachers, so that knowledge is co-created; all learners participating in the dialogue contribute to the knowledge creation. The Interactive mode also includes specific types of interactions between learners and computer-based learning environments, provided there is turn taking and the dialogue generates new information. Chi and Wylie [9, 10] acknowledge that overt behaviors are not a perfect indicator of how the student is learning; a student may just be listening to a lecture, without any overt behaviors, but internally could be self-explaining, making analogies, or reflecting on their knowledge, which means that the student is engaging with the material in a constructive way. However, overt behaviors (or the lack of them) are much easier to observe and serve as a good proxy for classifying learning situations. Chi and Wylie [9] also state that the boundaries between the four learning modes are blurry and that a student may engage in multiple learning modes in one session.

The ICAP theory has been validated empirically in numerous studies [9, 10]. The results of the prior studies with AVW-Space were in line with the ICAP theory [13, 47], as discussed in the following section.

1.2.3 Supporting Engagement in VBL.

In online learning, videos are often the primary source of information but require students to engage with presented material to learn. Mariachi et al. [44] note the similarity between watching videos and attending a lecture: In both cases, the student needs to be attentive, think about concepts being discussed, and integrate the newly acquired knowledge with existing knowledge. Problems identified with VBL include limited interactivity with videos (including difficulties with navigating and searching), the lack of human interaction, personalization, assessment, and feedback [8, 14].

Just passively watching videos is not enough for learning [35, 44]. Deep engagement with videos can be supported in several ways, starting with the careful design of videos themselves [6, 30] to lower the cognitive load during VBL, by using segmenting and signaling to highlight important parts of the video. Significant work has been done on increasing engagement with videos, by adding annotation tools, quizzes, examples, and interactive exercises [35, 36, 52, 63]. For example, Kovacs [36] found that students engage heavily with quizzes, as they see the benefit in testing themselves and receiving immediate feedback. The author also found that in-video quizzes can influence students’ video-viewing behavior, with some students focusing predominantly on the quizzes and seeking answers to the questions in preceding sections. Mariachi et al. [44] compared a group of learners who passively watched videos to two groups using active learning strategies: video annotations and in-video prompts. They found that students who annotated videos had higher self-efficacy and higher learning. These kinds of approaches result in improved engagement and learning but require substantial effort by teachers.

Data-driven approaches using interaction traces from VBL have been proposed to improve techniques for video navigation, such as visualizations of collective navigation traces, dynamic timeline scrubbing, and enhanced in-video searches [8]. There are also approaches using students' ratings, annotations, and contributions to forums to support social navigation and collaborative learning [8, 54]. Hypervideo players provide advanced controls for navigation through multiple videos, links to supplementary material, and support for collaboration [7]. For example, Dodson et al. [23] present a study with the Vide hypervideo player, in which students could watch lecture recordings. Vide provides a visual representation of the video content in the form of a filmstrip, as well as the video transcript. The student can take notes and highlight parts of the video transcript. Vide also provides histograms below videos, displaying how much time the student spent on various parts of the video. The student can navigate through the video by using the filmstrip or the transcript. Note Struct [41] prompts users to take notes and later engages learners in reflecting on and summarizing their notes. Durrell [14] extracts concepts from videos and organizes them in knowledge graphs, thus allowing students to explore the video content more effectively.

Active Video Watching (AVW) was suggested as a VBL approach that encourages reflective learning [16, 21, 39, 45, 61]. The AVW-Space platform implements AVW by allowing instructors to embed YouTube videos for students to watch and comment on. The platform supports reflection during interactive note taking by requiring students to assign one of the teacher-specified aspects to each comment [21]. The aspects focus students' attention on their previous experience or on their future performance. The platform also supports social learning through the rating of comments written by other students in the class.

Early studies with AVW-Space showed that students who write comments and rate comments written by other students increase their knowledge. However, there was a substantial group of students who did not engage with videos actively [45]. Therefore, nudges were added to AVW-Space to increase engagement [20, 21]. Nudges encourage students to write comments, while at the same time preserving the student's freedom to ignore nudges and use videos in their preferred way.

There are two types of nudges in AVW-Space: reminder nudges and quality nudges. Reminder nudges encourage students to write comments and are given to students who passively watch videos [20, 46]. Quality nudges rely on automatic comment classification: AVW-Space classifies comments as students write them into three categories: low quality (brief, vague, or off-topic comments), medium quality (comments that repeat the video content), and high quality (comments with elaboration, reflection, or planning for the future) [47]. Quality nudges are given when students write low-quality comments and provide examples of high-quality comments or encourage students to write about a particular topic. These nudges also encourage students who have written high-quality comments to continue writing them. Visualizations were also added to AVW-Space, to provide information about the student's performance on different activities and help them reflect on their learning [48]. The results of those studies show that the percentage of constructive students was higher when nudges and visualizations were provided.
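To make the nudging mechanism concrete, the following Python sketch illustrates the kind of rule-based selection described above. The data structure, thresholds, and nudge texts are illustrative assumptions for exposition, not AVW-Space's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ViewingState:
    """Illustrative snapshot of one student's activity on one video."""
    has_watched_before: bool                                    # the video was opened in an earlier session
    comments: List[str] = field(default_factory=list)           # raw comment texts
    comment_quality: List[str] = field(default_factory=list)    # "low" | "medium" | "high" labels
                                                                 # from the automatic classifier

def choose_nudge(state: ViewingState) -> Optional[str]:
    """Pick at most one nudge, mirroring the reminder/quality split described in the text."""
    # Reminder nudge: the student is watching passively (no comments written so far).
    if state.has_watched_before and not state.comments:
        return "Make a comment"
    # Quality nudge: the latest comment was classified as low quality.
    if state.comment_quality and state.comment_quality[-1] == "low":
        return ("Try to elaborate: relate this point to one of your own meetings, "
                "or note how you plan to apply it in the future.")
    # Reinforcement: keep students who already write high-quality comments going.
    if state.comment_quality and state.comment_quality[-1] == "high":
        return "Great reflection - keep commenting like this on the remaining videos."
    return None
```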

While video-based learning has been used to teach programming skills (e.g., by providing tutorial videos to students or recording videos for later viewing), video-based learning of soft skills is not common in software engineering.

1.3 Aims and Research Questions

Prior research on AVW-Space focused on presentation skills as the target soft skill [21, 47]. The research reported in this article investigates the use of AVW-Space to teach a new soft skill: communication in face-to-face software development meetings. Face-to-face meeting communication skills are particularly relevant for software engineering practitioners [27], since practitioners spend a significant amount of their time meeting and working with others (rather than sitting down to write code on their own). For example, software engineers spend time in planning meetings, working meetings with the whole team or parts of the team, review meetings, or one-on-ones with teammates, customers, or managers, and collaboration, psychological safety, transparency, and accountability strengthen a team's ability to produce quality software [1]. Please note that AVW-Space is freely available to teachers wanting to develop spaces for their classes (https://www.canterbury.ac.nz/engineering/schools/csse/research/ictg/avw-space/ ).

We developed a new instance of AVW-Space for this soft skill and conducted a three-year long study in the second-year software engineering project course at the University of Canterbury, New Zealand. Based on a quasi-experimental design, we used the year 2020 as control, with AVW-Space providing just support for commenting on videos and rating comments written by peers. In 2021 and 2022, students had full support, with AVW-Space providing personalized nudges and visualizations based on students’ behavior. In addition to providing YouTube videos on the target skill, we also recorded team meetings and provided the recordings in AVW-Space for students to review. Our study addresses the following research questions:

RQ1: What are the differences in student engagement and learning in the instances of the course with or without nudges?

Since previous research with AVW-Space on presentation skills showed that nudges were effective in increasing students' engagement and learning, we were interested in evaluating whether similar effects would be found when teaching a different soft skill: face-to-face communication.

RQ2: Is there a causal effect of nudges on students’ engagement and learning?

We wanted to determine explicitly the effect of nudges on engagement and learning, by conducting causal modeling.

RQ3: What kind of perceptions do students have on usefulness of the recordings of team meetings?

Since students work in teams in the chosen SE course, we wanted to determine whether students would be interested in watching their own team meeting, as well as finding out what kind of reflections and learning result from such activity.

RQ4: What are students’ perceptions of learning in AVW-Space?

We were interested in students' perceptions of the provided online training, as well as in whether there were any differences in perceptions when nudges were or were not provided.


2 METHOD

We implemented a new instance of AVW-Space for teaching face-to-face communication skills in software development meetings and integrated this online training into a second-year semester-long SE project course at the University of Canterbury in 2020–2022. We obtained approval for the research from our institution's Human Ethics committee. In the course, the students worked in teams of four to six, had weekly face-to-face meetings, and received a short lecture on what to do before, during and after each meeting. In addition, the students were invited to use AVW-Space to learn face-to-face meeting communication skills. Students received 5% of the final grade if they completed all phases of the online training.

The study was done over the initial seven weeks of the course, as specified in Table 1. In week 1, the lecturers formed the groups and informed students about the online training. Students completed Survey 1 and then watched and commented on the videos in week 2. We selected 10 publicly available videos from YouTube (listed in Table 2) and integrated them into AVW-Space. In all three years, the same 10 videos were provided to the students. Six tutorial videos cover various elements of face-to-face communication, while four example videos present real (or acted) meetings. The students were instructed to watch and comment on tutorial videos first and later to use what they had learned to critique the example videos. The screenshot in Figure 1 shows a student writing a comment on a tutorial video. While a comment is being written, the video is paused. The student can enter the text of the comment in the provided box and also needs to select one of the aspects (listed above the comment box) to specify what the comment is related to. Please note that we used the same aspects for writing comments on tutorial videos as in the previous AVW-Space studies on presentation skills [21]. Three aspects were defined to stimulate learners to reflect on their experience ("I am rather good at this," "I didn't realize I wasn't doing it," "I did/saw this in the past"), and the final aspect was "I like this point." We defined the following aspects for writing comments on example videos: "verbal communication," "giving feedback," "receiving feedback," "active listening," and "meeting contributions," corresponding to the concepts introduced in the tutorial videos. The screenshot in Figure 1 also shows a nudge (the red box titled "Make a comment"). This nudge was given because the student had watched the video before but had not written any comments.

Table 1.
Week | Tasks
1 | Forming groups
2 | Survey 1; watch and comment on videos
3 | Review and rate comments written by other students in the class
4 | Meeting recording (Survey 2 in 2021/2022)
5 | Watch and comment on the team meeting
6 | Review and rate comments on the team meeting
7 | Survey 3

Table 1. Overview of the Study

Table 2.
Video | Title | Length | YouTube video id
Tutorial 1 | The 7 Cs of Communication | 2:46 | sYBw9-8eCuM
Tutorial 2 | Body Language | 2:45 | AqixzdpJL4U
Tutorial 3 | Giving feedback | 1:46 | Id_uG8Djdsc
Tutorial 4 | Improve your listening skills with active listening | 2:39 | t2z9mdX1j4A
Tutorial 5 | How Google builds the perfect team | 2:22 | v2PaZ8Nl2T4
Tutorial 6 | How to effectively contribute to team meetings | 4:05 | cKh75Po5Qsc
Example 1 | Bad Stand-up | 5:22 | zrmcl-pjmoc
Example 2 | The Daily Stand-up Meeting | 2:34 | VjNxQ-a-x2M
Example 3 | Examples of Good Meeting Communications Skills | 1:50 | czpBKC9Plh4
Example 4 | How NOT to run a meeting | 2:37 | F1qstYxrqn8

Table 2. The Videos Used in the Study

Fig. 1.

Fig. 1. Comment writing in AVW-Space.

In week 3, anonymized comments were made available to the whole class for rating. Students could rate comments using the rating categories previously used in [21]: "This is useful for me," "I hadn't thought of this," "I didn't notice this," "I don't agree with this," and "I like this point."

Following that, in week 4 students received Survey 2 in 2021 and 2022 (please note that in 2020 there was no such survey). Additionally, one meeting of each group was recorded and added to AVW-Space. Only members of the team could watch their own team meeting. Students were asked to watch and comment on their own meeting in week 5 and then rate each other's comments in week 6. In week 7, students were asked to complete the final survey (referred to as Survey 3).

2.1 Participant Characteristics

Table 3 presents demographic data about our participants. The class size is shown in the Course column, while the next column shows the number of students who completed Survey 1 (2020: 50 students, 2021: 49 students, 2022: 48 students). There were no students who repeated the course. In all three years, most of the participants were male, in the 18–23 age group, and native English speakers, which is typical for SE classes at our institution. Most participants had no formal training on communication in face-to-face meetings. Those participants who had formal training received it in high school, at university, or at a community/volunteer group. There were also three participants who had received professional development training. Most participants had no software engineering experience outside their university courses, but all had passed introductory programming courses and an introductory software engineering course (covering software engineering principles, practices, processes and tools, and object-oriented programming and design, as well as a small software project completed in pairs). There were also two Likert questions on how often the participants watched YouTube (the YouTube column) and how often they watched YouTube videos for learning (the YT4L column). For these two questions, the options were 0 (never), 1 (occasionally), 2 (once a month), 3 (every week), or 4 (every day). The averages and standard deviations presented in Table 3 show the high usage of YouTube; none of the participants responded with 0 for those questions.

Table 3.
Year | Course | n | Male | 18–23 age | Native English | No training | No SE experience | YouTube mean (SD) | YT4L mean (SD)
2020 | 56 | 50 | 84% | 98% | 78% | 80% | 74% | 3.38 (.95) | 2.26 (.99)
2021 | 50 | 49 | 80% | 98% | 88% | 70% | 80% | 3.53 (.82) | 2.73 (1.06)
2022 | 48 | 48 | 67% | 92% | 81% | 75% | 81% | 3.63 (.64) | 2.73 (.82)

Table 3. Demographic Data

2.2 Instrumentation and Data Collection

AVW-Space automatically logs all actions students perform while interacting with the platform. In addition to the logs, we collected data via three surveys (administered within AVW-Space). The surveys are provided in the Appendix.

Survey 1 was identical across the years and contained demographic questions and questions on training and experiences with face-to-face meetings. In addition, there was a timed question asking students to write everything they knew about face-to-face meeting communication skills in three minutes. Surveys 2 and 3 also contained the same question about students' knowledge of face-to-face communication in meetings.

Survey 3 included three instruments, (1) Cognitive-Affective-Psychomotor (CAP) perceived learning scale [56], (2) NASA-TLX cognitive load scale [31], and (3) Technology Acceptance Model (TAM) scale [18], to capture students' overall perception of AVW-Space. Additionally, Survey 3 contained open-ended questions on students’ perceptions of various features of AVW-Space.

The CAP perceived learning scale [56] was used to capture students' estimates of learning from their experience with AVW-Space. The scale consists of nine items organized into three dimensions: Cognitive, Affective, and Psychomotor. These items were presented to students in Survey 3. We slightly modified the text of each item to suit our experimental design, as specified in Table 4. We did not use the second item, as our study was much shorter than a full course. The students responded to items using a Likert scale from 0 (strongly disagree) to 6 (strongly agree). Please note that item 7 is negatively worded, and hence we reversed the score for that item.

Table 4.
Item | Original text | Dimension | Modified text
1 | I can organize course material into a logical structure | Cognitive | I can summarize what I have learnt in AVW-Space for someone who has not learned from AVW-Space.
2 | I cannot produce a course study guide for future students. | Cognitive | N/A
3 | I am able to use physical skills learned in this course outside of class | Psychomotor | I am able to use the effective meeting participation concepts I learnt in AVW-Space in my future meetings.
4 | I have changed my attitudes about the course subject matter as a result of this course. | Affective | I have changed my attitudes about effective meeting participation as a result of AVW-Space.
5 | I can intelligently critique the texts used in this course. | Cognitive | I can assess the quality of face-to-face communication in the example videos used in this training.
6 | I feel more self-reliant as the result of the content learned in this course. | Affective | I feel more confident in my face-to-face communication skills in meetings as a result of AVW-Space.
7 | I have not expanded my physical skills as a result of this course. | Psychomotor | I have not expanded my knowledge of effective meeting participation concepts as a result of AVW-Space.
8 | I can demonstrate to others the physical skills learned in this course | Psychomotor | I can demonstrate to others the effective meeting participation concepts I learnt in AVW-Space.
9 | I feel that I am a more sophisticated thinker as a result of this course | Affective | I feel that I am a more effective meeting participant as a result of AVW-Space.

Table 4. The Items of the CAP Perceived Learning Scale
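For clarity, below is a minimal sketch of how the modified CAP scores can be computed from the items in Table 4, assuming responses keyed by item number on the 0–6 Likert scale (item 2 was not used; item 7 is reverse-scored). The function and dictionary names are ours, introduced only for illustration.

```python
# Dimension membership follows Table 4 (item 2 was excluded from our study).
DIMENSIONS = {
    "Cognitive":   [1, 5],
    "Affective":   [4, 6, 9],
    "Psychomotor": [3, 7, 8],
}

def cap_scores(responses: dict) -> dict:
    """responses maps item number -> rating (0 = strongly disagree ... 6 = strongly agree)."""
    adjusted = dict(responses)
    adjusted[7] = 6 - adjusted[7]               # reverse-score the negatively worded item 7
    scores = {dim: sum(adjusted[i] for i in items) for dim, items in DIMENSIONS.items()}
    scores["Overall"] = sum(scores.values())    # 8 items x 6 points = maximum of 48
    return scores

# Example: a student who answered 5 on every administered item
print(cap_scores({i: 5 for i in [1, 3, 4, 5, 6, 7, 8, 9]}))
```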

The NASA-TLX task load index is a multidimensional scale assessing participants' workload [31]. The scale consists of six dimensions: mental demand, physical demand, temporal demand, frustration, effort, and performance. Due to the nature of the tasks in our study, we did not use the physical and temporal dimensions.

The TAM is a validated instrument that measures perceived usefulness and perceived ease of use of a particular computer system [18]. Perceived usefulness is defined as “the degree to which a person believes that using a particular system would enhance his or her job performance,” while the perceived ease of use is defined as “the degree to which a person believes that using a particular system would be free of effort” [18]. These two factors determine the overall acceptance of the system and have been shown to be significantly correlated with both self-reported current or predicted future use of studied computer systems [18].

2.3 Measuring Students’ Knowledge

The conceptual knowledge questions, which appeared in all three surveys, were used as a proxy for the test of the students’ conceptual knowledge of the target skill. The instructions given to students were as follows: “Write all words/phrases (one per line) that you associate with effective communication in software engineering meetings.”

Students' responses were marked automatically, using text analytics methods and a vocabulary of the target skill developed by the authors. To develop the vocabulary, we first generated the corpus from the transcripts of the tutorial videos. The tokens were extracted and lemmatized after lowercasing the texts and removing punctuation and stop words. Next, using collocation statistics [43] implemented in the Phrases module of the Gensim library, words and bigram phrases that appeared more than twice in the corpus were extracted automatically. In addition to collocation statistics, following the work of [50], we extracted the most relevant and similar words using Global Vectors for Word Representation (GloVe) embeddings to represent each word. A total of 225 words and phrases were extracted, along with 225 synonyms. Three independent expert coders verified whether the extracted words should be in the domain vocabulary. Each word was coded with 1 or 0, depending on whether that word was relevant. Pairwise Cohen's kappa revealed moderate (0.55), substantial (0.61), and nearly perfect (0.91) agreement between the coders [38]. Fleiss' kappa κ = 0.69 also showed substantial inter-coder agreement [38], but Krippendorff's alpha coefficient α = 0.31 was low [37]. The three coders then reviewed their codes with a fourth coder to resolve differences, using a majority vote to achieve agreement (NB all coders are authors of this article). As a result, 11 words were excluded, and 10 new words were added. The conceptual knowledge score is the number of vocabulary concepts that appear in a student's response.
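The following condensed Python sketch illustrates the vocabulary-building and scoring pipeline described above (lowercasing, punctuation and stop-word removal, lemmatization, bigram collocations via Gensim's Phrases module, and counting vocabulary concepts in a response). Thresholds and helper names are illustrative, and the GloVe-based synonym expansion and the manual verification by coders are omitted.

```python
# Requires gensim and nltk (with the 'stopwords' and 'wordnet' corpora downloaded).
import re
from gensim.models.phrases import Phrases
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def preprocess(text):
    """Lowercase, strip punctuation, drop stop words, lemmatize."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [lemmatizer.lemmatize(t) for t in tokens if t not in stop_words]

def build_candidate_vocabulary(transcripts):
    """Extract words and bigram phrases that appear more than twice in the corpus."""
    sentences = [preprocess(t) for t in transcripts]
    bigrams = Phrases(sentences, min_count=3, threshold=1.0)   # collocation statistics
    counts = {}
    for sent in sentences:
        for token in bigrams[sent]:   # unigrams plus detected bigrams, e.g., "active_listening"
            counts[token] = counts.get(token, 0) + 1
    return {term for term, freq in counts.items() if freq > 2}

def conceptual_knowledge_score(response, vocabulary):
    """Number of vocabulary concepts appearing in a student's survey response."""
    tokens = preprocess(response)
    bigrams_in_response = {"_".join(pair) for pair in zip(tokens, tokens[1:])}
    return len(vocabulary & (set(tokens) | bigrams_in_response))
```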

2.4 Analytic Strategy

We used non-parametric statistical data analyses when data were not normally distributed. Repeated-measures ANOVA was used to compare conceptual knowledge scores from the three surveys and across three years. Pairwise comparisons were performed with Bonferroni correction. All analyses were performed using IBM SPSS version 28.0.1.0 unless otherwise specified. For causal modeling, we used thinkCausal (apsta.shinyapps.io/thinkCausal/).
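As an illustration of the main analysis steps, the sketch below shows how the repeated-measures ANOVA on the conceptual knowledge scores and the Bonferroni-corrected pairwise comparisons could be reproduced in Python (pandas, statsmodels, SciPy) instead of SPSS. The file and column names are assumptions made for the example.

```python
from itertools import combinations
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per student per survey,
# with columns 'student', 'survey' ("CK1"/"CK2"/"CK3"), and 'score'.
df = pd.read_csv("conceptual_knowledge_long.csv")

# Repeated-measures ANOVA (AnovaRM requires complete cases for every student).
print(AnovaRM(df, depvar="score", subject="student", within=["survey"]).fit())

# Pairwise comparisons with Bonferroni correction.
wide = df.pivot(index="student", columns="survey", values="score")
pairs = list(combinations(sorted(df["survey"].unique()), 2))
for a, b in pairs:
    complete = wide.dropna(subset=[a, b])
    res = ttest_rel(complete[a], complete[b])
    print(a, b, res.statistic, min(res.pvalue * len(pairs), 1.0))  # adjusted p-value
```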


3 RESULTS

3.1 Analyzing Students’ Engagement and Learning

We removed from further analyses the data of students who completed Survey 1 but watched no videos. To answer RQ1, we analyzed the collected data to generate measures of how students interacted with AVW-Space (Table 5). Several measures differ significantly between the years. For example, the number of sessions students had in AVW-Space and the number of distinct days on which they interacted with the platform differ across the three years, with the 2021 and 2022 numbers being significantly higher than those of 2020. The Videos row in the table reports the average number of tutorial/example videos watched; please note that some students watched the same video multiple times, which explains why the reported numbers are higher than 10 (the total number of videos). The number of videos watched is significantly higher in 2021, although we do not have an explanation for this increase.

Table 5.
Measure | 2020 | 2021 | 2022 | Significance
Participants | 47 | 49 | 47 |
Sessions | 5.55 (2.06) | 6.67 (3.83) | 9.00 (9.01) | F = 4.39, p < .05
Days | 4.77 (1.67) | 5.49 (2.60) | 6.83 (3.05) | F = 8.19, p < .001
Videos | 11.77 (3.54) | 44.33 (18.08) | 13.85 (3.95) | F = 132.35, p < .001
Comments | 9.62 (13.21) | 29.55 (17.81) | 29.81 (18.54) | F = 22.76, p < .001
Nudges | N/A | 44.33 (18.08) | 45.94 (18.28) |
HQ comments | 1.87 (2.58) | 9.10 (6.30) | 9.13 (6.25) | F = 28.89, p < .001
Ratings | 104.47 (154.16) | 145.24 (280.88) | 58.77 (62.35) | no
CK1 | 6.77 (4.57) | 7.15 (4.62) | 8.77 (4.01) | no
CK2 | N/A | 8.59 (4.18), n = 44 | 11.71 (5.46), n = 41 | F = 8.81, p < .01
CK3 | 9.90 (6.91), n = 30 | 10.70 (4.14), n = 10 | 10.20 (4.85), n = 40 | no

Table 5. Interaction with AVW-Space: Mean (SD)

More importantly, the number of comments written on the tutorial/example videos is significantly higher in 2021 and 2022 than in 2020, when there were no nudges and visualizations. There was no significant difference in the number of nudges students received in 2021 and 2022, which is to be expected as the experimental setup was identical in those two years.

The High Quality (HQ) comments row reports the number of high-quality comments written by students. Such comments show self-reflection, critical thinking about the video content, or planning for future performance. The categorization of comments is done automatically by AVW-Space immediately after a comment is written, using machine learning classifiers, as explained in [47]. We found a significant difference in the number of high-quality comments, with the 2020 students writing significantly fewer HQ comments than the 2021 and 2022 students. This finding is not surprising, as nudges were designed to motivate students to write better quality comments [47].

The last three rows of Table 5 report the conceptual knowledge (CK) scores from Surveys 1, 2, and 3 (CK1, CK2, and CK3, respectively). There was no significant difference between the three classes on the initial (CK1) scores. In all three years, the final score (CK3) is significantly higher than CK1. In 2020, there was no Survey 2, so it was not possible to see whether the increase in the conceptual knowledge score was due to learning from the initial 10 videos or due to learning related to the recording of the team meeting. For that reason, we introduced Survey 2 in 2021 and 2022. Unfortunately, the COVID-19 lockdown overlapped with the last phase of the study in 2021, and only 10 participants completed that last phase. Therefore, we analyzed the 2022 data in more depth. The repeated-measures ANOVA for the conceptual knowledge scores for the 2022 data is significant (F = 4.53, p < 0.02, partial eta squared = .104). The pairwise comparisons using the Bonferroni correction show a significant difference between CK1 and CK2 (p = 0.03), but no other significant differences. We found no significant differences between the CK3 scores of the three groups of participants.

Another way to analyze students' engagement is to classify them post hoc based on their overt behaviors, using the ICAP framework [9]. As discussed previously, ICAP identifies four categories with decreasing levels of engagement: Interactive (I), Constructive (C), Active (A), and Passive (P). The ICAP framework states that the more engaged students are, the more they will learn (I > C > A > P). The Interactive mode is irrelevant for our study, as AVW-Space does not support direct interaction between students. Students classified as Passive are those who only watched videos and did not write any comments. We distinguish Constructive from Active students by the number of high-quality comments on tutorial videos, in line with [21, 47]. The median number of high-quality comments on tutorial videos in 2020 was 1. Therefore, Active students are those who wrote at most one high-quality comment, while Constructive students wrote two or more such comments.
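The post hoc classification rule can be summarized in a few lines of Python (a sketch; the function name is ours, and the threshold of one high-quality comment is the 2020 median mentioned above):

```python
def icap_category(total_comments: int, high_quality_comments: int) -> str:
    """Post hoc ICAP category from a student's commenting behavior on tutorial videos."""
    if total_comments == 0:
        return "Passive"          # only watched videos, wrote no comments
    if high_quality_comments <= 1:
        return "Active"           # commented, but at most one high-quality comment
    return "Constructive"         # two or more high-quality comments
```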

Table 6 presents the distribution of students over the three ICAP categories. In 2020, there were 16 students who only watched videos, without writing any comments, while in 2021 and 2022 all students wrote comments. A similar effect of nudges was observed in previous studies on presentation skills in AVW-Space [21, 47]. The number of Active students also decreased from 2020 to 2021/2022, when a vast majority of students were classified as Constructive. A chi-square test of homogeneity between the three years and the ICAP categories revealed a significant difference (chi-square = 49.83, p < 0.001) with an effect size (Phi) of .59 (p < 0.001). A post hoc analysis with Bonferroni corrections revealed a significant increase in the number of Constructive students and a significant decrease in the number of Passive students in 2021/2022 (p < 0.0056).

Table 6.
Year | Summary | n | Passive | Active | Constructive
2020 | No nudges | 47 | 16 | 9 | 22
2021 | Nudges + Visualizations | 49 | 0 | 3 | 46
2022 | Nudges + Visualizations | 47 | 0 | 1 | 46

Table 6. The Number of Students in Various ICAP Categories

To summarize our answer to RQ1, we found significant differences comparing student behavior in 2020 to 2021 and 2022, when nudges and visualizations were provided. In the latter case, a vast majority of students interacted more with AVW-Space and exhibited constructive behavior.

3.2 Causal Modeling

To identify causal effects, it is necessary to compare groups of participants who had or did not have a particular treatment, assuming random allocation of participants to groups. In real situations such as our study, it is not possible to control all factors that can potentially affect the outcome. Additionally, in educational studies it is unethical to withhold treatment from some students. Instead, modeling the relationship among outcome variables, covariates, and treatment is possible using the Bayesian Additive Regression Trees (BART) machine learning algorithm [11, 32].

To determine the effect of nudges and visualizations on engagement and learning (RQ2), we conducted causal modeling using thinkCausal (apsta.shinyapps.io/thinkCausal/) [33], which uses BART as a foundation. This tool estimates the average treatment effects and is suitable for observational studies. Previous research shows that BART has excellent performance in causal inference [24, 32]. The tool is user friendly and scaffolds researchers through the data analytic process.
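Conceptually, the estimation fits a flexible outcome model on the covariates plus the treatment indicator and then contrasts the model's predictions with the treatment switched on and off for every student. The sketch below illustrates this idea in Python, using scikit-learn's gradient boosting as a stand-in for BART (thinkCausal itself uses BART and also reports uncertainty); the file and column names are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("avw_students.csv")                       # one row per student (all three years)
covariates = ["age_group", "prior_training", "youtube_use", "se_experience"]
X = pd.get_dummies(df[covariates])
t = df["nudges"].astype(int).rename("t")                   # 1 = 2021/2022 (nudges), 0 = 2020
y = df["high_quality_comments"]                            # outcome of interest

# Outcome model over covariates and treatment (an S-learner; BART plays this role in thinkCausal).
model = GradientBoostingRegressor().fit(pd.concat([X, t], axis=1), y)

# Predict each student's outcome under treatment and under control, then average the difference.
X_treated = pd.concat([X, pd.Series(1, index=X.index, name="t")], axis=1)
X_control = pd.concat([X, pd.Series(0, index=X.index, name="t")], axis=1)
ate = np.mean(model.predict(X_treated) - model.predict(X_control))
print(f"Estimated average treatment effect on HQ comments: {ate:.2f}")
```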

We grouped students from 2021 and 2022 together, as the experimental setup was identical in those two years. The results show that receiving nudges and visualizations led to an increase of 17.03 comments and an increase of 5.04 high-quality comments compared to what would have happened if students had not received nudges (Figures 2 and 3, respectively). Similar results were obtained when we modeled other measures: There was an increase of 1.92 sessions when nudges/visualizations were provided. Providing visualizations and nudges also led to an increase of 0.52 in the CK3 conceptual knowledge score in comparison to not providing them.

Fig. 2.

Fig. 2. Causal modeling of the number of comments made.

Fig. 3.

Fig. 3. Causal modeling of the number of high-quality comments made.

These results provide the answer to our second research question (RQ2): Providing nudges and visualizations causes increased engagement, measured by the number of sessions with AVW-Space. There is also a causal effect of nudges/visualizations on the number of comments written and, more importantly, on the number of high-quality comments. We also found a causal effect on learning, as measured by our conceptual knowledge scores.

3.3 Students’ Meetings

In all three years, in the fifth week of the study, students watched and commented on a recording of their own team meeting, while in the following week they rated each other's comments. This allowed us to answer RQ3. Please note that nudges were not given in 2021/2022 while students watched the team video. The Watched row in Table 7 specifies the number of participants who watched their team video. That number is much lower in 2021, and we believe the reason is the COVID-19 lockdown, which overlapped with that phase of the study. Table 7 also reports the average number of comments on the team video. The majority of students watched their team video in 2020 and in 2022. Although we previously saw a significant difference in the number of comments on the tutorial/example videos, there is no significant difference in the number of comments on the team video across the three years. This illustrates how interesting students found this part of the study.

Table 7.
Measure | 2020 | 2021 | 2022
Participants (total) | 47 | 49 | 47
Watched | 43 | 25 | 43
Team comments | 5.05 (5.65) | 4.79 (2.49) | 5.54 (3.63)

Table 7. Engagement with the Team Videos

Students’ comments on the team meeting were of different types. Some comments pointed out positive/negative aspects of the team meeting, with students using what they learnt from tutorial videos to critique their own meeting. There were also comments that focused on opportunities for improving their meetings, such as “Could have better followed an agenda - meeting seems less purposeful.” Students also reflected on the meeting overall (“I think we had a very effective meeting. We stayed on topic, stuck to the agenda, and left with a good understanding of our next steps”) or reflected on their own behavior, e.g., “For myself, if I was to have a meeting with someone and they had the same body language as I did in this meeting (at least in the first 10 minutes), I would get an impression that they are not interested, disengaged, or not paying attention.” “I might be over contributing while not leaving space for others to contribute,” and “I need to construct/deliver my ideas clearer.”

Survey 3 included two questions asking participants about the usefulness of commenting on the team video and rating the comments. We classified the provided feedback manually, based on whether those activities were reported as useful or not. The majority of students believed that watching the team video was beneficial (97% in 2020, 70% in 2021, and 88% in 2022). Most responses were about the usefulness of watching themselves, for example, "You can step into an outsiders shoes to view your groups interaction." Some responses expanded on this, pointing out the benefit of watching the meeting from another point of view: "Very useful as an outsider's perspective allowed me to see things (good and bad) that I did in the meeting more clearly" and "You're able to see your meetings from an outside perspective, which enables you to see things you might've missed in the moment such as team members not contributing as much or not paying attention…." The responses were reflective, focusing on their own or their team members' shortcomings or mistakes, helping them identify areas for improvement: "Know my shortcomings and Know how to improve me next time," "Very useful, it provides me with many of my weakness and my improvements," and "I could communicate to the team what I believe needed to be worked on, and what we were doing well."

Most students also found reviewing/rating comments on the team video useful (79% in 2020, 60% in 2021, and 78% in 2022). Similarly to the feedback on watching/commenting on the team video, students pointed out that this activity helped them know the perspective of their fellow team members: “Understanding your team members views, feelings, and opinions, leading to better communication and teamwork” and “It not only allowed me to see my thoughts on our team's communication, but also showed me what the rest of my team thought about how we communicated as well….” Some responses pointed out that the anonymized method of reviewing and rating comments was helpful to get or give constructive criticism from/to the team members: “It was useful to get feedback from my peers in an anonymised forum, which allowed us to make comments that might have been taken defensively if made directly,” “Get some new opinions with no bias on how they think our team communicates and we would receive more constructive criticism,” and “Forced each team member to consider how others feel and facilitates team growth.”

To summarize our answer to the third research question (RQ3), we found that the participants were very much interested in watching their own meetings, and reported that they learnt from that activity. One student even pointed out that the feature to watch their own team video was a vital component of AVW-Space, stating that “It's a great way to observe how your meetings are structured and how everyone interacts from a third person perspective. I think this aspect of the AVW-Space was its strongest.”

3.4 Participants’ Perceptions

Our fourth research question focuses on students' perceptions of the provided training. We address this question in the following subsections, starting with the perceptions of learning as measured by the modified CAP scale. We then analyze the TAM scores and present the results for perceived usefulness and perceived ease of use. The third subsection reports the findings from the NASA-TLX index, while the final subsection presents students' perceptions of specific AVW-Space features.

3.4.1 Perceptions of Learning.

As discussed in Section 2.2, we modified the CAP perceived learning scale [56] to capture students' estimates of learning from their experience with AVW-Space. We first assessed the Cronbach alpha estimate of internal consistency of the three dimensions of the CAP perceived learning scale. The values for the Cognitive (α = 0.71) and Affective (α = 0.61) dimensions were higher than the value for the Psychomotor dimension (α = 0.29). We do not report the alpha value for the whole scale, as the scale is not unidimensional [60]. Table 8 reports the averages and standard deviations for the scores on the three dimensions, as well as the overall scores, for the 2020 class (no nudges) and the combined 2021 and 2022 classes (when nudges were provided). The scores for the dimensions as well as the overall scores are calculated by summing the scores on the relevant items [56]. The overall score therefore ranges from 0 to a maximum of 48, with higher scores representing higher perceptions of total learning. We performed the independent-samples Kruskal–Wallis test and found a significant difference in the overall score (H = 5.23, p = 0.02). There was no significant difference in the cognitive scores, but the scores on the affective (H = 8.49) and psychomotor (H = 9.21) dimensions were significantly different (p < 0.005 in both cases).

Table 8.
Nudges | n | Cognitive | Affective | Psychomotor | Overall score
No | 30 | 8.73 (2.18) | 11.10 (2.50) | 11.27 (1.64) | 31.10 (5.18)
Yes | 49 | 8.78 (2.37) | 12.69 (3.87) | 12.65 (3.25) | 34.12 (8.68)

Table 8. The Scores for the Modified CAP Perceived Learning Scale

3.4.2 Perceived Usefulness and Perceived Ease of Use.

We integrated 10 questions from the TAM into Survey 3 to assess students' acceptance of AVW-Space. Participants provided their responses using a Likert scale ranging from 1 (extremely likely) to 7 (extremely unlikely). The internal consistency was good, with Cronbach alpha values of 0.95 for perceived usefulness and 0.85 for perceived ease of use. Table 9 reports the scores on the two dimensions for the two groups of participants. We found no significant differences in the TAM scores.

Table 9.
Nudges | n | Perceived usefulness | Perceived ease of use
No | 30 | 3.79 (1.47) | 3.53 (1.53)
Yes | 49 | 4.09 (1.40) | 3.53 (1.10)

Table 9. Scores for the Two TAM Dimensions

3.4.3 Task Load Index.

Survey 3 contained four questions adapted from the NASA-TLX task load index on comment writing and four questions on comment rating, which allowed participants to specify their perceptions. For each type of activity, participants were asked to specify how mentally demanding it was (Demand), how hard they had to work (Effort), how discouraged, irritated, stressed, and annoyed they felt (Stress), and how successful they think they were while performing the activity (Performance). Participants provided their responses using the Likert scale ranging from 1 (Very low) to 20 (Very high). Table 10 presents the scores.

Table 10.
Measure | 2020 (17) | 2021/2022 (49)
Comment Writing
Demand | 7.88 (3.90) | 8.61 (3.62)
Effort | 6.59 (4.02) | 8.27 (4.05)
Stress | 6.41 (4.87) | 7.43 (5.28)
Performance | 8.71 (4.57) | 11.73 (4.86)
Comment Rating
Demand | 5.05 (3.25) | 6.69 (4.30)
Effort | 4.29 (2.95) | 6.49 (4.62)
Stress | 6.05 (4.47) | 6.92 (5.84)
Performance | 9.65 (5.11) | 11.37 (4.52)

Table 10. Scores for the NASA-TLX Index for the Two Tasks

The scores from the 2021/2022 participants are generally higher than those of the 2020 students. The only significant difference was found for Performance when writing comments (F = 5.05, p < 0.05, eta squared = .073), with the 2021/2022 participants reporting more confidence in their performance. For each of the questions, students could provide free-form feedback. Most of the feedback on writing comments stated that it was easy to write comments, e.g., "I just wrote comments when I thought about something during the videos," but students also pointed out the need to be attentive, e.g., "It wasn't too mentally demanding, but it did require constantly thinking about when you should add comments about something." The participants stated that rating comments was neither mentally demanding nor effortful but also noted that there were too many comments to rate.

3.4.4 Perceptions of AVW-Space Features.

Survey 3 contained open-ended questions that asked participants to specify their opinions of AVW-Space in terms of its functionality. One question was on the usefulness of pausing a video to write a comment. The vast majority of students found this feature useful (96% in 2020, 90% in 2021 and 100% in 2022). Students reported that this feature allowed them to gather their thoughts and focus on writing their comments, for example “Gives you a chance to think and come up with a proper response rather than moving forward and potentially missing something because you were writing.”  Only two students, one from 2020 and the other from 2021, indicated that pausing the video was not helpful, with only one providing a reason for the answer (“not very, you lose track”).

When asked about the usefulness of specifying aspects when composing comments, there was an increase in the number of students who found aspects useful: 46% in 2020, 50% in 2021, and 59% in 2022. Common reasons in the 2022 responses suggest that aspects helped in reflecting on the comments: "It helps you reflect on the factors you identify in a personal way which helps to connect you with the situation in a neurological sense," "I think that its useful to be able to relate the comment back to an idea," and "It gets you to relate what's being discussed to yourself and provokes you to think harder about what aspects of communication are being discussed." Interestingly, some students found aspects limiting but nevertheless useful: "It's a little bit useful but I feel as though there were not enough possible combinations" and "For the most part this was good, however, having those as options felt very limiting. Often none of them quite matched and I just had to pick the most relevant one." Some students found aspects not useful, as they had other topics in mind or would have preferred to use several aspects simultaneously.

In all three years, students appreciated the value of reviewing/rating comments made by peers (83% in 2020, 70% in 2021, and 83% in 2022). The most frequently cited reasons are about being able to see other points of view: “It is useful as you can see other people's perspective on the videos shown” and “You get to see what other people thought of the video and you might get to understand something you didn't before.” Some students also indicated that this activity enabled them to organize their thoughts about the video, and see the ideas or concepts that they might have missed, for example, “Extremely useful, allowed me to get other student's perspectives that I might have missed” and “You can visualize other people's viewpoints and better allow yourself to learn what you have not found.” The feedback was generally positive, showing that this feature enabled students to learn from other students and increased their understanding of the concepts discussed in the video. When replying to the question on the usefulness of rating categories when rating comments, many participants noted that they would like to see more rating categories, or even define the categories by themselves.


4 DISCUSSION AND CONCLUSIONS

We presented online training developed for face-to-face communication in software development meetings, and the results from three instances of the second-year software engineering course into which the training was embedded. Our quasi-experimental study allowed us to compare the interactions and learning of the 2020 students, when nudges were not used, to the 2021 and 2022 classes that received the nudges. The nudges AVW-Space provides were designed to encourage constructive behavior, characterized by a high level of engagement in terms of the overall comment writing and also with respect to writing high-quality comments [47].

The findings from the 2020 class are similar to those of the early studies with AVW-Space [20, 45], with 34% of participants passively watching tutorial and example videos. In contrast, there were no passive students in 2021/2022, when nudges were provided. We found significant differences on the various engagement measures between the classes, with the 2021/2022 students interacting more with AVW-Space, writing more comments, and also writing more high-quality comments. We also found causal effects of nudges on the number of sessions with AVW-Space, the number of comments overall, and the number of high-quality comments written, as well as on the conceptual knowledge score of the target soft skill.

The responses on the modified CAP perceived learning scale show no differences for the Cognitive dimension, but there were significant differences in the scores for the Affective and Psychomotor dimensions, illustrating participants' attitude towards the skill learnt and their confidence in being able to execute the skill. The analysis of the TAM responses found no significant differences, with all participants finding AVW-Space moderately useful and easy to use. We also found no significant difference in the NASA-TLX responses, with the only exception being that the treatment group reported higher scores for performance on comment writing. Generally, students acknowledge the usefulness of commenting on videos, especially their team video. Their responses also confirm that rating comments is useful, as it enables social learning.

Our study shows that when nudges are given, engagement with tutorial/example videos and learning increase. Interestingly, there were no differences in student engagement with team videos, showing how much students were interested in that activity. Watching videos of their own meetings enabled students to reflect on their own contribution to the meeting and on how well the team communicated. Many students provided feedback on how useful it was to see their own meeting, which enabled the teams to grow and improve. We found that students felt overwhelmed by the number of comments available for rating, pointing to the need for intelligent filtering for individual students. In future work, we will explore the possibility of providing comments for rating in an adaptive way. One approach is to analyze a learner's comments to identify weaknesses in terms of the concepts mentioned or the depth of reflection and then to identify relevant comments to present to that student.

Our article contributes to the literature by showing the effectiveness of active video watching in teaching a new soft skill: face-to-face communication in software development meetings. Furthermore, we show that nudges do cause an increase in interactions with AVW-Space, especially an increase in the number of high-quality comments, and an increase in learning. We acknowledge that our findings may not be specific to software engineering; however, the chosen soft skill was selected based on its relevance for practicing software engineers. Also, study participants were chosen based on the target audience of our learning platform: software engineering students.

A threat to validity is the effect of the COVID-19 pandemic on the results. However, these effects were relatively similar across the three years of our study in the country where the research was performed.

There are several limitations of the presented work, the first being the small population size, which is due to the size of the course in which the study was conducted. Additional research is needed to determine the effect of the online training on larger populations of students. We plan to conduct additional studies in the coming years in the same course, which will result in a higher number of participants. The second limitation is that the research was done with students from just one educational institution. Our findings are therefore contextualized and may not necessarily generalize to other tertiary institutions. In future work, we will repeat the study at other institutions, as well as with industry professionals.

Another limitation is the way we assessed students’ knowledge. As stated earlier, we used one question in each survey as a proxy for assessing the students’ conceptual knowledge of face-to-face communication skills. This is not ideal for several reasons. Answering surveys can be annoying for students, and their responses do not necessarily represent their true knowledge. Furthermore, listing concepts related to the target soft skill does not necessarily mean that students know how to perform the skill. Ideally, the extent of learning would be assessed by a human expert observing the students in real meetings before and after the training, but this requires substantial resources and is not always practically possible. We are currently developing an instrument for self-reporting the ability to communicate face-to-face; however, the instrument could not be used in our study. We plan to repeat the study once the instrument has been fully validated.
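To make the proxy measure concrete, the following is a minimal sketch assuming the conceptual knowledge score is simply the number of distinct target concepts a student mentions in the free-listing survey question. The concept dictionary and synonym lists are hypothetical and only for illustration; they are not the coding scheme actually used in the study.

```python
# Hypothetical concept dictionary with simple synonym lists (illustration only).
TARGET_CONCEPTS = {
    "active listening": ["active listening", "listening", "listen"],
    "feedback": ["feedback"],
    "verbal communication": ["verbal communication", "clarity", "speak clearly"],
    "contribution": ["contribution", "contribute", "participation"],
}

def conceptual_knowledge_score(listed_phrases):
    """Count the distinct target concepts mentioned across the listed words/phrases."""
    text = " ".join(p.lower() for p in listed_phrases)
    return sum(1 for synonyms in TARGET_CONCEPTS.values()
               if any(s in text for s in synonyms))

# Example: three listed phrases cover two distinct concepts.
print(conceptual_knowledge_score(["active listening", "give feedback", "set an agenda"]))  # 2
```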

APPENDIX

Survey 1

(1) What is your age?

  • 18–23
  • 24–29
  • 30–35
  • 36+

(2) What is your gender?

  • Male
  • Female
  • Other gender
  • Prefer not to answer

(3) What is your first language?

(4) How much training have you had on communication in face-to-face meetings?

  • No training
  • Some training
  • Quite a bit
  • A lot
  • Extensive training

(5) Select the type of training on communication in face-to-face meetings you have had:

  • Training at high school
  • Training at University
  • Training at community/volunteer group
  • Professional training
  • Other (Please specify)

(6) Over the last year, how frequently would you attend face-to-face formal meetings with more than two people?

  • Never
  • Occasionally
  • Once a month
  • Every week
  • Every day

(7) Please specify the type(s) of meetings you have had that involved more than two people:

  • Group assignment in high school
  • Group assignment in University
  • Meeting with lecturers
  • As part of an internship
  • Part-time job related to software development
  • Part-time job not related to software development
  • Other (Please specify)

(8) How much experience do you have working in software development teams outside the university?

  • None
  • Some experience (less than a week)
  • Quite a bit (a month)
  • A lot (several months)
  • Extensive experience (more than a year)

(9) How often do you watch YouTube?

  • Never
  • Occasionally
  • Once a month
  • Every week
  • Every day

(10) How often do you use YouTube for learning?

  • Never
  • Occasionally
  • Once a month
  • Every week
  • Every day

(11) [You have max 3 minutes to answer] Write all words/phrases (one per line) that you associate with effective communication in software engineering meetings.

Survey 2

[You have max 3 minutes to answer] Write all words/phrases (one per line) that you associate with effective communication in software engineering meetings.

Survey 3

[You have max 3 minutes to answer] Write all words/phrases (one per line) that you associate with effective communication in software engineering meetings.

The following questions ask you to estimate how much you have learned from AVW-Space.

Please rate, on a scale of 1 (strongly disagree) to 7 (strongly agree), to what extent you agree with each statement, where lower numbers reflect less agreement and higher numbers reflect more agreement.

I can summarize what I have learnt in AVW-Space for someone who has not learned from AVW-Space.
I am able to use the effective meeting participation concepts I learnt in AVW-Space in my future meetings.
I have changed my attitudes about effective meeting participation as a result of AVW-Space.
I can assess the quality of face-to-face communication in the example videos used in AVW-Space.
I feel more confident in my face-to-face communication skills in meetings as a result of AVW-Space.
I have not expanded my knowledge of effective meeting participation concepts as a result of AVW-Space.
I can demonstrate to others the effective meeting participation concepts I learnt in AVW-Space.
I feel that I am a more effective meeting participant as a result of AVW-Space.

The following questions have a Likert scale from 1 (Very easy) to 20 (Very hard).

MENTAL DEMAND - Writing comments

How mentally demanding was it to write comments on videos in AVW-Space?

For example, how much mental and perceptual activity was required - thinking, deciding, remembering, looking, searching?

EFFORT - Writing comments

How hard did you have to work (mentally and physically) to write comments on videos in AVW-Space?

FRUSTRATION - Writing comments

How discouraged, irritated, stressed and annoyed did you feel while writing comments on videos in AVW-Space?

PERFORMANCE - Writing comments

How successful do you think you were in identifying useful points about effective meeting participation when commenting on videos in AVW-Space?

Based on your use of AVW-Space, what would be the usefulness of pausing a video to write a comment?

Based on your use of AVW-Space, what would be the usefulness of asking you to indicate what the comments refer to, e.g., for tutorials: 'I am rather good at this', 'I did/saw this in the past', 'I did not realise I was not doing this', 'I like this point'; for examples: 'Verbal communication', 'Feedback', 'Active listening', 'Contribution'.

The following questions have a Likert scale from 1 (Very easy) to 20 (Very hard).

MENTAL DEMAND - Rating comments

How mentally demanding was it to review and rate comments on videos in AVW-Space?

For example, how much mental and perceptual activity was required - thinking, deciding, remembering, looking, searching?

EFFORT - Rating comments

How hard did you have to work (mentally and physically) to review and rate comments on videos in AVW-Space?

FRUSTRATION - Rating comments

How discouraged, irritated, stressed and annoyed did you feel while reviewing and rating the comments on videos in AVW-Space?

PERFORMANCE - Rating comments

How successful do you think you were in identifying useful points about effective meeting participation when reviewing and rating comments made by others in AVW-Space?

In the second phase of the study, you experienced two additional features of AVW-Space:

  • reviewing the comments on the videos made by other users of AVW-Space;
  • rating the comments of other users.

Based on your use of AVW-Space, what would be the usefulness of reviewing the comments on the videos made by other people?

Based on your use of AVW-Space, what would be the usefulness of rating the comments of your team members on the team meeting video, e.g., 'This is useful for me'; 'I hadn't thought of this'; 'I did not notice this'; 'I like this point'; 'I do not agree with this'?

The AVW-Space system is aimed at informal learning of soft skills (e.g., meeting communication, giving presentations, negotiating, managing teams) using selected videos.

The questions below ask how you perceive the usefulness of AVW-Space for informal learning of soft skills.

Extremely likely (1), Quite likely (2), Slightly likely (3), Neutral (4), Slightly unlikely (5), Quite unlikely (6), Extremely unlikely (7)

I think I would like to use AVW-Space frequently.

I would recommend AVW-Space to my friends.

Using AVW-Space would enable me to improve my soft skills quickly.

Using AVW-Space would improve my performance considering the development of soft skills.

Using AVW-Space would enhance my effectiveness when developing soft skills.

I would find AVW-Space useful in my studies/job.

I would find it easy to get AVW-Space to do what I want it to do.

My interaction with AVW-Space would be clear and understandable.

I would find AVW-Space easy to use.

If I am provided the opportunity, I would continue to use AVW-Space for informal learning.

In the last phase of the study, you performed a peer-assessment exercise.

Based on the peer-assessment exercise, what would be the usefulness of watching and commenting on your team video?

Based on the peer-assessment exercise, what would be the usefulness of reviewing and rating the comments made by other team members on your team video?

What do you think is most exciting about AVW-Space?

What do you think is most disappointing about AVW-Space?

REFERENCES

[1] Alami Adam and Krancher Oliver. 2022. How Scrum adds value to achieving software quality? Empir. Softw. Eng. 27, 7 (2022), 165.
[2] Andersson Christina and Logofatu Doina. 2018. Using cultural heterogeneity to improve soft skills in engineering and computer science education. In Global Engineering Education Conference. 191–195.
[3] Anthony Suzanne and Garner Benjamin. 2016. Teaching soft skills to business students: An analysis of multiple pedagogical methods. Bus. Profess. Commun. Quart. 79, 3 (2016), 360–370.
[4] Bezard Lilian, Debacq Marie, and Rosso Astrid. 2020. The carnivorous yoghurts: A “serious” escape game for stirring labs. Educ. Chem. Eng. 33 (2020), 1–8.
[5] Bonwell Charles and Eison James. 1991. Active Learning: Creating Excitement in the Classroom. 1991 ASHE-ERIC Higher Education Reports. ERIC Clearinghouse on Higher Education, The George Washington University, DC.
[6] Brame Cynthia J. 2016. Effective educational videos: Principles and guidelines for maximizing student learning from video content. CBE—Life Sci. Educ. 15, 4 (2016), es6.
[7] Cattaneo Alberto, van der Meij Hans, and Sauli Florinda. 2018. An empirical test of three instructional scenarios for hypervideo use in a vocational education lesson. Comput. Schools 35, 4 (2018), 249–267.
[8] Chatti Mohamed Amine, Marinov Momchil, Sabov Oleksandr, Laksono Ridho, Sofyan Zuhra, Mohamed Fahmy Yousef Ahmed, and Schroeder Ulrik. 2016. Video annotation and analytics in CourseMapper. Smart Learn. Environ. 3, 1 (2016), 1–21.
[9] Chi Michelene T. and Wylie Ruth. 2014. The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational Psychologist 49, 4 (2014), 219–243.
[10] Chi Michelene, Adams Joshua, Bogusch Emily B., Bruchok Christiana, Kang Seokmin, Lancaster Matthew, Levy Roy, et al. 2018. Translating the ICAP theory of cognitive engagement into practice. Cogn. Sci. 42, 6 (2018), 1777–1832.
[11] Chipman Hugh A., George Edward I., and McCulloch Robert E. 2010. BART: Bayesian additive regression trees. Ann. Appl. Statist. 4, 1 (2010), 266–298.
[12] Cico Orges, Jaccheri Letizia, Nguyen-Duc Anh, and Zhang He. 2021. Exploring the intersection between software industry and software engineering education—A systematic mapping of software engineering trends. J. Syst. Softw. 172 (2021), 110736.
[13] Clarke S. J., Peel D. J., Arnab S., Morini L., Keegan H., and Wood O. 2017. EscapED: A framework for creating educational escape rooms and interactive games for higher/further education. Int. J. Serious Games 4 (2017), 73–86.
[14] Coccoli Mauro, Torre Ilaria, and Galluccio Ilenia. 2023. User experience evaluation of Edurell interface for video augmentation. Multimedia Tools Appl. (2023), 1–23.
[15] Conkey Curtis, Bowers Clint, Cannon-Bowers Janis, and Sanchez Alicia. 2013. Machinima and video-based soft-skills training for frontline healthcare workers. Games Health: Res. Dev. Clin. Appl. 2, 1 (2013), 39–43.
[16] Correia Nuno and Chambel Teresa. 1999. Active video watching using annotation. In Proceedings of the 7th ACM International Conference on Multimedia (Part 2). 151–154.
[17] Cronin Michael W. and Cronin Karen A. 1992. Recent empirical studies of the pedagogical effects of interactive video instruction in “soft skill” areas. J. Comput. High. Educ. 3, 2 (1992), 53–85.
[18] Davis Fred D. 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart. (1989), 319–340.
[19] Maria de Souza Almeida Lilian. 2019. Understanding Industry's Expectations of Engineering Communication Skills. Doctoral dissertation, Utah State University.
[20] Dimitrova Vania, Mitrovic Antonija, Piotrkowicz Alicja, Lau Lydia, and Weerasinghe Amali. 2017. Using learning analytics to devise interactive personalised nudges for active video watching. In Proceedings of the 25th ACM User Modeling, Adaptation and Personalization Conference, M. Bielikova, E. Herder, F. Cena, and M. Desmarais (Eds.). 22–31.
[21] Dimitrova Vania and Mitrovic Antonija. 2022. Choice architecture for nudges to support constructive learning in active video watching. Int. J. Artif. Intell. Educ. 32, 4 (2022), 892–930.
[22] Djaouti D., Alvarez J., Jessel J. P., and Rampnoux O. 2011. Origins of serious games. Serious Games Edutain. Appl. (2011), 25–43.
[23] Dodson Samuel, Roll Ido, Fong Matthew, Yoon Dongwook, Harandi Negar M., and Fels Sidney. 2018. An active viewing framework for video-based learning. In Proceedings of the 5th Annual ACM Conference on Learning at Scale. ACM, 1–4.
[24] Vincent Dorie, Jennifer Hill, Uri Shalit, Marc Scott, and Dan Cervone. 2019. Automated versus do-it-yourself methods for causal inference: Lessons learned from a data analysis competition. Stat. Sci. 34, 1 (2019), 43–68.
[25] Galster Matthias, Mitrovic Antonija, and Gordon Matthew. 2018. Toward enhancing the training of software engineering students and professionals using active video watching. In Proceedings of the 40th International Conference on Software Engineering: Software Engineering Education and Training Track. ACM, 5–8.
[26] Galster Matthias, Mitrovic Antonija, Malinen Sanna, and Holland Jay. 2022. What soft skills does the software industry *really* want? An exploratory study of software positions in New Zealand. In Proceedings of the 16th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement. 272–282.
[27] Galster Matthias, Mitrovic Antonija, Malinen Sanna, Holland Jay, and Peiris Pasan. 2023. Soft skills required from software professionals in New Zealand. Inf. Softw. Technol. 160 (2023), 107232.
[28] Geithner Silke and Menzel Daniela. 2016. Effectiveness of learning through experience and reflection in a project management simulation. Simul. Gaming 47, 2 (2016), 228–256.
[29] Graßl Isabella, Fraser Gordon, Trieflinger Stefan, and Kuhrmann Marco. 2023. Exposing software engineering students to stressful projects: Does diversity matter? In Proceedings of the 45th International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET’23). 210–222.
[30] Guo Philip J., Kim Juho, and Rubin Rob. 2014. How video production affects student engagement: An empirical study of MOOC videos. In Proceedings of the 1st ACM Conference on Learning@Scale. 41–50.
[31] Sandra G. Hart. 2006. NASA-task load index (NASA-TLX); 20 years later. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting 50, 9 (2006), 904–908.
[32] Hill Jennifer, Linero Antonio, and Murray Jared. 2020. Bayesian additive regression trees: A review and look forward. Annu. Rev. Stat. Appl. 7 (2020), 251–278.
[33] Hill Jennifer, Perrett George, and Dorie Vincent. Machine learning for causal inference. In Handbook of Multivariate Matching and Weighting for Causal Inference, P. Rosenbaum, D. Small, and J. Zubizarreta (Eds.), Chapman and Hall/CRC, 415–444.
[34] Iacob Claudia and Faily Shamal. 2019. Exploring the gap between the student expectations and the reality of teamwork in undergraduate software engineering group projects. J. Syst. Softw. 157 (2019), 110393.
[35] Koedinger Kenneth R., Kim Jihee, Jia Julianna Zhuxin, McLaughlin Elizabeth A., and Bier Norman L. 2015. Learning is not a spectator sport: Doing is better than watching for learning from a MOOC. In Proceedings of the 2nd ACM Conference on Learning@Scale. 111–120.
[36] Kovacs Geza. 2016. Effects of in-video quizzes on MOOC lecture viewing. In Proceedings of the 3rd ACM Conference on Learning@Scale. 31–40.
[37] Krippendorff Klaus. 2010. Krippendorff's Alpha. SAGE Publications, 670.
[38] Landis J. Richard and Koch Gary G. 1977. The measurement of observer agreement for categorical data. Biometrics (1977), 159–174.
[39] Lau Lydia, Mitrovic Antonija, Weerasinghe Amali, and Dimitrova Vania. 2016. Usability of an active video watching system for soft skills training. In Proceedings of the 1st International Workshop on Intelligent Mentoring Systems, Intelligent Tutoring Systems Conference.
[40] Shi Li Ze, Nawar Arony Nowshin, Devathasan Kezia, and Damian Daniela. 2023. Software is the easy part of software engineering—Lessons and experiences from a large-scale, multi-team capstone course. In Proceedings of the 45th International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET’23). 223–234.
[41] Liu Ching, Yang Chi-Lan, Williams Joseph Jay, and Wang Hao-Chuan. 2019. Notestruct: Scaffolding note-taking while learning from online videos. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. 1–6.
[42] McGowan Nadia, López-Serrano Aída, and Burgos Daniel. 2023. Serious games and soft skills in higher education: A case study of the design of Compete!. Electronics 12, 6 (2023), 1432.
[43] Mikolov Tomas, Sutskever Ilya, Chen Kai, Corrado Greg S., and Dean Jeff. 2013. Distributed representations of words and phrases and their compositionality. Adv. Neural Inf. Process. Syst. 26 (2013).
[44] Mirriahi Negin, Jovanović Jelena, Lim Lisa-Angelique, and Lodge Jason M. 2021. Two sides of the same coin: Video annotations and in-video questions for active learning. Educ. Technol. Res. Dev. 69, 5 (2021), 2571–2588.
[45] Mitrovic Antonija, Dimitrova Vania, Lau Lydia, Weerasinghe Amali, and Mathews Moffat. 2017. Supporting constructive video-based learning: Requirements elicitation from exploratory studies. In Artificial Intelligence in Education, E. André et al. (Eds.). 224–237.
[46] Mitrovic Antonija, Gordon Matthew, Piotrkowicz Alicja, and Dimitrova Vania. 2019. Investigating the effect of adding nudges to increase engagement in active video watching. In Proceedings of the 20th International Conference on Artificial Intelligence in Education, LNAI Vol. 11625, S. Isotani et al. (Eds.). Springer Nature Switzerland, 320–332.
[47] Mohammadhassan Negar, Mitrovic Antonija, and Neshatian Kourosh. 2022. Investigating the effect of nudges for improving comment quality in active video watching. Comput. Educ. 176 (2022), 104340.
[48] Mohammadhassan Negar and Mitrovic Antonija. 2022. Investigating the effectiveness of visual learning analytics in active video watching. In Artificial Intelligence in Education, M. M. Rodrigo, N. Matsuda, A. I. Cristea, and V. Dimitrova (Eds.), Lecture Notes in Computer Science, Vol. 13355. Springer, Cham, 127–139.
[49] Morales-Trujillo Miguel Ehécatl, Galster Matthias, Gilson Fabian, and Mathews Moffat. 2021. A three-year study on peer evaluation in a software engineering project course. IEEE Trans. Educ. 65, 3 (2021), 409–418.
[50] Pennington Jeffrey, Socher Richard, and Manning Christopher D. 2014. GloVe: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP’14). 1532–1543.
[51] Picard Cyril, Hardebolle Cécile, Tormey Roland, and Schiffmann Jürg. 2022. Which professional skills do students learn in engineering team-based projects? Eur. J. Eng. Educ. 47, 2 (2022), 314–332.
[52] Poquet Oleksandra, Lim Lisa, Mirriahi Negin, and Dawson Shane. 2018. Video and learning: A systematic review (2007–2017). In Proceedings of the 8th International Conference on Learning Analytics and Knowledge. 151–160.
[53] Prenner Nils, Klünder Jil, and Schneider Kurt. 2018. Making meeting success measurable by participants’ feedback. In Proceedings of the 3rd International Workshop on Emotion Awareness in Software Engineering. 25–31.
[54] Risko Evan F., Foulsham Tom, Dawson Shane, and Kingstone Alan. 2012. The collaborative lecture annotation system (CLAS): A new tool for distributed learning. IEEE Trans. Learn. Technol. 6, 1 (2012), 4–13.
[55] Reher Vanessa, Rehbein G., and Reher Peter. 2018. Integrating video recording and self-reflection to enhance communication skills training for dental students. In International Conference on the Development of Biomedical Engineering in Vietnam. Springer, Singapore, 715–719.
[56] Rovai Alfred, Wighting Mervyn, Baker Jason, and Grooms Linda. 2009. Development of an instrument to measure perceived cognitive, affective, and psychomotor learning in traditional and virtual classroom higher education settings. Internet High. Educ. 12, 1 (2009), 7–13.
[57] Schipper Mariecke and van der Stappen Esther. 2018. Motivation and attitude of computer engineering students toward soft skills. In Global Engineering Education Conference. IEEE, 217–222.
[58] Sedelmaier Yvonne and Landes Dieter. 2014. Practicing soft skills in software engineering: A project-based didactical approach. In Overcoming Challenges in Software Engineering Education: Delivering Non-Technical Knowledge and Skills. IGI Global, 161–179.
[59] Sousa Micael. 2021. Serious board games: Modding existing games for collaborative ideation processes. Int. J. Serious Games 8, 2 (2021), 129–146.
[60] Taber Keith S. 2018. The use of Cronbach's alpha when developing and reporting research instruments in science education. Res. Sci. Educ. 48, 6 (2018), 1273–1296.
[61] Takashima Akio, Yamamoto Yasuhiro, and Nakakoji Kumiyo. 2004. A model and a tool for active watching: Knowledge construction through interacting with video. In Proceedings of INTERACTION: Systems, Practice and Theory. 331–358.
[62] Tang Stephen, Hanneghan Martin, and El-Rhalibi Abdennour. 2009. Introduction to games-based learning. In Games-Based Learning Advancements for Multi-Sensory Human Computer Interfaces: Techniques and Effective Practices. IGI Global, New York, 1–17.
[63] Yousef Ahmed Mohamed Fahmy, Chatti Mohamed Amine, and Schroeder Ulrik. 2014. The state of video-based learning: A review and future perspectives. Int. J. Adv. Life Sci. 6, 3 (2014), 122–135.
[64] Wats Meenu and Kumar Wats Rakesh. 2009. Developing soft skills in students. Int. J. Learn. 15, 12 (2009).
