
  • Development of a College Student Validation Survey: A Design-Based Research Approach
  • Toni A. May, Dara N. Bright, Yiyun (Kate) Fan, Christopher Fornaro, Kristin L. K. Koskey, and Thomas Heverin

Rendón’s (1994) seminal research on validation theory (VT) provided a model for understanding how validating experiences can positively influence “culturally diverse” (p. 33) students in higher education. Validation is “an enabling, confirming and supportive process initiated by in- and out-of-class agents that fosters academic and interpersonal development” (Rendón, 1994, p. 44) and is critical for the transition, persistence, and success of college students (Rendón, 1994, 2002). Through this theoretical model, scholars have extensively explored how institutions can provide validating experiences by developing supportive learning environments for general undergraduate populations and specific groups such as Black, Latinx, low-income, first-generation, and two-year college students (e.g., Allen, 2016; Bauer, 2014). Many prior studies have relied on qualitative methods. While Rendón and Muñoz (2011) have called for further study of validation’s impact on student outcomes through quantitative methods, few quantitative instruments of VT exist. The primary tool used for assessing VT consists of two scales from the larger Diverse Learning Environments (DLE; Hurtado et al., 2011) survey that have demonstrated their effectiveness for measuring academic validation in class and general interpersonal validation among college students at large (Hurtado et al., 2015). DLE scales were not, however, designed to match Rendón’s full four-component conception of VT (i.e., academic in-class, academic out-of-class, interpersonal in-class, interpersonal out-of-class). Thus, a new measure of VT is necessary to capture quantitative information aligned with Rendón’s model. The purpose of this study was to expand the field of quantitative VT research by presenting validity evidence from a new survey entitled the Validation Theory Survey (VTS) that was designed to align with Rendón’s VT model and to be used with undergraduate students. One overarching research question guided this study: To what extent did validity evidence (i.e., content, response process, consequential, and internal structure) support the use of the VTS to evaluate undergraduates’ perceptions of their academic and interpersonal validating experiences inside and outside higher education classrooms?

METHODS

Educational design-based research (DBR) approaches emphasize a process of developing tools for a specific purpose through iterative methods of designing, testing, evaluating, and reflecting (Scott et al., 2020). DBR techniques implemented to develop and validate educational instruments have been effective when engaging in qualitative and quantitative field-testing methods (e.g., Sondergeld & Johnson, 2019) to evaluate multiple sources of validity evidence in concordance with The Standards for Educational and Psychological Testing (AERA et al., 2014). The Standards have urged instrument developers to evaluate multiple types of validity evidence, including (a) content, or item alignment with the construct; (b) response process, or whether participants understand the instrument as researchers intended; (c) consequential, or controlling for bias and potential negative impact on participants; (d) internal structure, or whether unidimensional and reliable constructs are formed; and (e) relationship to other variables, or whether instrument outcomes relate to other hypothesized variables. This study presents findings related to the development and validation of the four VTS scales and reports on all types of validity evidence except the relationship to other variables, which will be examined in subsequent research.

Instrumentation

As part of a grant managed by the National Science Foundation in collaboration with the U.S. Office of Personnel Management and Department of Homeland Security, the VTS was created to evaluate project impact on undergraduate cybersecurity majors’ validating experiences in the program compared to those not participating. After a thorough review and synthesis of VT literature (e.g., Acevedo-Gil et al., 2015; Allen, 2016; Baber, 2018; Bauer, 2014; Rendón, 1994, 2002; Rendón Linares & Muñoz, 2011), 44 unique items were drafted to represent commonly noted validating experiences (17 academic in-class; 9 academic out-of-class; 9 interpersonal in-class; 9 interpersonal out-of-class). These were rated on a 5-point scale (strongly disagree, disagree, agree, strongly agree, not applicable). Survey refinement based on two rounds of qualitative and quantitative field testing resulted...
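To make the instrument structure described above concrete, the sketch below scores the four VTS scales from raw item responses. The item counts per scale and the response options come directly from the text; everything else, including the variable names and the scoring rule (mean of answered items, with "not applicable" treated as missing), is an illustrative assumption rather than the authors' actual scoring procedure.

```python
# Hypothetical sketch of scoring the four VTS scales.
# Item counts and response options are taken from the text; the scoring
# rule and all names here are illustrative assumptions, not the authors' method.

SCALES = {
    "academic_in_class": 17,
    "academic_out_of_class": 9,
    "interpersonal_in_class": 9,
    "interpersonal_out_of_class": 9,
}

RESPONSE_VALUES = {
    "strongly disagree": 1,
    "disagree": 2,
    "agree": 3,
    "strongly agree": 4,
    "not applicable": None,  # excluded from scoring
}

def score_scale(responses):
    """Mean of the answered items on one scale; None if none were answered."""
    answered = [RESPONSE_VALUES[r] for r in responses
                if RESPONSE_VALUES[r] is not None]
    return sum(answered) / len(answered) if answered else None

# Example: one respondent's nine interpersonal in-class items
example = ["agree", "strongly agree", "agree", "not applicable",
           "disagree", "agree", "agree", "strongly agree", "agree"]
print(score_scale(example))  # mean of the 8 answered items
```

Treating "not applicable" as missing rather than as a scale point is one common convention; the excerpt does not specify how such responses were handled.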
