The current landscape of higher education, characterized by the changing demography of college students and public health concerns amid the COVID-19 pandemic, has prompted institutions to employ a more comprehensive approach to the evaluation of admissions applicants’ credentials, personal characteristics, and past experiences. The proliferation of test-optional admissions policies (FairTest National Center for Fair & Open Testing, 2023; Furuta, 2017) and holistic approaches to the evaluation of college admission applicants necessitates a more comprehensive understanding of applicants’ potential to succeed in postsecondary education. Admissions models vary widely depending on institutions’ mission and philosophical orientation. For example, some institutions hold that any student is entitled to a college education whereas others hold that college access is a reward for prior academic success (Perfetto et al., 1999). Regardless of an institution’s philosophical basis for admissions decision-making and the corresponding criteria it uses to determine applicants’ eligibility for admission, most admissions models can be improved by including measures that reliably, accurately, and comprehensively evaluate applicants’ potential for success in college with concern for the fair and equitable distribution of educational opportunities (Camara & Kimmel, 2005; Zwick, 2017).

Measures lack validity when they are not reproducible (or reliable, as defined in psychometrics) for applicants from all backgrounds (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 2014; Drost, 2011). For example, scholars have documented concerns about the use of standardized tests in college admissions, citing differential prediction by reported race (e.g., Blau et al., 2004), gender (e.g., Leonard & Jiang, 1999), and socioeconomic status (e.g., Rothstein, 2004; Sackett et al., 2012). Research has also demonstrated the limited predictive validity of tests beyond the first year of undergraduate study (e.g., Berry & Sackett, 2009; Sackett & Kuncel, 2018; Sackett et al., 2012).

Although an increasing number of institutions have adopted test-optional policies (FairTest National Center for Fair & Open Testing, 2023; Furuta, 2017) or embraced holistic review (Bastedo et al., 2018; Hossler et al., 2019), few institutions have replaced standardized tests with new, proprietary measures of applicants’ potential to succeed in college. Rather, many institutions have redistributed the weight given to standardized test scores to other aspects of the admissions application such as high school grade point average (HSGPA; Galla et al., 2019; Sweitzer et al., 2018), a practice which may perpetuate barriers to equitable access for students who are systemically marginalized and have limited access to educational opportunities such as enrollment in Advanced Placement courses (Kolluri, 2018). Accordingly, recent research has highlighted the need to provide admissions officers with richer information about prospective students. When used as part of a holistic admissions model, information about applicants’ high school and home context may mitigate bias against disadvantaged students and help to promote equitable access to postsecondary education (Bastedo et al., 2022).

As colleges and universities employ holistic assessments of applicants, the increased inclusion of motivational and developmental constructs in the admissions process stands out as a promising possibility. Sedlacek (2004) advanced the view that non-cognitive constructs, such as achievement motivation, may be used to expand the predictive potential of more frequently relied upon cognitive measures. Motivational constructs reflect the resources and abilities students need to effectively adapt to the various academic, emotional, and social challenges associated with attaining a postsecondary education (Allen et al., 2009; Camara & Kimmel, 2005; Kaplan et al., 2017; Sedlacek, 2017). Scholars suggest flexible admissions policies that incorporate a wide variety of student characteristics—including psychosocial and motivational constructs—will likely have the added benefit of increasing diversity among the student body by promoting more equitable access to postsecondary education (Soares, 2015; Zwick, 2002).

Despite evidence of considerable associations between several psychosocial constructs and college student outcomes, an adequately valid integrated inventory of these constructs has yet to be developed (Le et al., 2005; Sedlacek, 2004). Such an inventory could prove highly valuable, especially as student demographics shift and an increasing number of institutions adopt test-optional policies. We respond to the need for reliable and valid measures of postsecondary promise by investigating the predictive validity of a proprietary measure of admissions applicants’ motivational-developmental attributes. Our measure includes select non-cognitive constructs derived from the social-cognitive and developmental-constructivist domains. The measure was developed in 2015 to coincide with the implementation of a test-optional admissions policy at a large, public, urban research university located in the Mid-Atlantic region of the United States. Guided by motivational and student development theories, we addressed the following research questions:

  1. Is the measure of motivational-developmental dimensions a reliable predictor of undergraduate GPA (UGPA) and 4- and 5-year bachelor’s degree completion?

  2. Is the measure of motivational-developmental dimensions a statistically significant predictor of UGPA?

  3. Is the measure of motivational-developmental dimensions a statistically significant predictor of 4- and 5-year bachelor’s degree completion?

  4. Which specific motivational-developmental dimensions, if any, predict UGPA and 4- and 5-year bachelor’s degree completion?

Literature Review

Although evidence suggests that high-achieving, underserved students particularly benefit from post-college achievements such as increased lifetime earnings (Ma et al., 2019), admissions practices frequently benefit students with higher socioeconomic status (Bowman & Bastedo, 2018). As inequitable admissions practices have persisted, the criteria and methodologies colleges and universities use to evaluate admissions applicants have been the subject of substantial empirical investigation (e.g., Bowman & Bastedo, 2018; Hossler et al., 2019; National Association for College Admission Counseling, 2016; Robbins et al., 2004). Previous research has predominantly focused on cognitive factors such as standardized test scores (i.e., SAT, ACT) and other variables with both cognitive and non-cognitive attributes such as high school course grades, HSGPA, and class rank (Burton & Ramist, 2001; Galla et al., 2019; Sweitzer et al., 2018). However, these criteria have been criticized as unreliable and potentially biased. In recognition of these shortcomings, scholars have attempted to validate new measures of postsecondary educational promise (e.g., Le et al., 2005; Oswald et al., 2004; Robbins et al., 2006; Sedlacek, 2004; Thomas et al., 2007).

Predictive Validity of Traditional Criteria in College Admissions

Traditionally, the criteria most widely considered in the undergraduate admissions process measure cognitive and reasoning abilities and therefore may be used to assess subject-specific knowledge and skills (Camara & Kimmel, 2005). Research has consistently demonstrated that standardized test scores and HSGPA each contribute to the prediction of academic performance and student persistence (e.g., Allen et al., 2008; Bridgeman et al., 2008; Kobrin et al., 2008; Robbins et al., 2004, 2006; Westrick et al., 2015; Willingham et al., 2002; see also Mattern & Patterson, 2011a, 2011b for a series of reports on SAT validity for predicting grades and persistence). However, previous studies have identified limitations of these variables, suggesting, for example, that SAT predictions may overestimate first-year college GPA and obscure background characteristics that are more accurately predictive of college performance (e.g., Rothstein, 2004; Soares, 2015; Syverson, 2007). Others have identified problematic variability of high school grading standards and course rigor (Atkinson & Geiser, 2009; Bowers, 2011; Buckley et al., 2018; Burton & Ramist, 2001; Camara & Kimmel, 2005; Syverson, 2007; Thorsen & Cliffordson, 2012; Westrick et al., 2015; Zwick, 2002), which may diminish the reliability of HSGPA and high school course grades for high-stakes admissions decisions.

Personal essays are a common component of holistic admissions reviews, offering insight into applicants’ experiences, challenges, goals, and interests (Todorova, 2018). The inclusion of personal essays in college applications is often justified as a means by which applicants can demonstrate character strengths and talents that may not be evident in academic records or standardized test scores. Personal essays may be used to evaluate applicants’ non-cognitive attributes such as creativity and self-efficacy (Pretz & Kaufman, 2017). Although critics argue that personal essay assessments, like standardized tests, may reflect pervasive social class- and race-based inequities (Alvero et al., 2021; Rosinger et al., 2021; Todorova, 2018; Warren, 2013), foregoing sole reliance on high school grades and standardized test scores by including personal essays in admissions decision-making is often championed as a way to expand college access to traditionally underserved students (Hossler et al., 2019).

Predictive Validity of Psychosocial Factors in College Admissions

The use of holistic approaches to undergraduate admissions has emerged as a strategy for expanding the predictive potential of traditional admissions criteria while addressing the disparities these criteria present (Bastedo et al., 2018; Hossler et al., 2019). Incorporating non-cognitive psychosocial factors into the admissions process has the potential to incrementally enhance the predictive power achieved when relying solely on cognitive variables (Allen et al., 2009; Sedlacek, 2004, 2017), as these psychosocial constructs are largely distinct from commonly used cognitive measures (Camara & Kimmel, 2005). Additionally, many psychosocial factors are more malleable than student demographic characteristics and traditional measures of cognitive ability, allowing for the possibility of interventions that may make college success more likely for students once they are enrolled (Allen et al., 2009; Robbins et al., 2004).

Prior research has identified associations between several non-cognitive psychosocial attributes and outcomes related to educational success (Robbins et al., 2006). Research pertaining to specific psychosocial factors has revealed positive associations between self-efficacy and various components of college success, including academic adjustment (Chemers et al., 2001), academic performance (Bandura, 1986; Krumrei-Mancuso et al., 2013; Robbins et al., 2006; Vuong et al., 2010; Zajacova et al., 2005), college satisfaction (Chemers et al., 2001; DeWitz & Walsh, 2002), and persistence and retention (Davidson & Beck, 2006; Robbins et al., 2004, 2006). Conscientiousness, a personality trait that is part of the “Big Five” factor model (Digman, 1990; Goldberg, 1993), has also consistently been shown to predict academic performance to an even greater extent than standardized test scores and HSGPA (Lounsbury et al., 2003; Nguyen et al., 2005; Noftle & Robins, 2007). Additionally, academic self-concept, or how well someone feels they can learn, has been identified as a significant predictor of academic performance, particularly among students from minoritized racial and ethnic groups and low-income backgrounds (Astin, 1992; Bailey, 1978; Gerardi, 2005; Sedlacek, 2004, 2017). Despite inconsistencies among the findings, scholars found coping and attributional styles to be both directly and indirectly associated with outcomes including student motivation, academic performance (LaForge & Cantrell, 2003; Martinez & Sewell, 2000; Rowe & Lockhart, 2005; Struthers et al., 2000; Yee et al., 2003), health status (Sasaki & Yamasaki, 2007), and happiness (O’Donnell et al., 2013). Additionally, prior studies sought to determine the extent to which psychosocial constructs reflect social inequities and thus become potentially biased criteria in college admissions. 
For instance, evidence suggests that the development of self-efficacy and self-concept are influenced by one’s social class (Easterbrook et al., 2020; Usher et al., 2019; Wiederkehr et al., 2015).

Theoretical Framework

We examined the validity of a measure of several motivational-developmental dimensions designed to predict undergraduate students’ academic performance and degree completion. These dimensions included causal attributions, coping strategies, relevant experiences, self-awareness, self-authorship, self-concept, and self-set goals. These constructs reflect the need for college students to have motivational resources to facilitate effort and persistence as well as developmental maturity to apply these resources adaptively in the context of college (Kaplan et al., 2017). The social-cognitive motivational (Bandura, 1986, 2006) and constructivist-developmental (Kegan, 1994) perspectives provide corresponding complementary theoretical frameworks from which the motivational-developmental dimensions we examined were drawn.

Social-Cognitive Motivational Perspective

The social-cognitive motivational perspective underscores the combined influence on persistence, adjustment, coping, and performance of students’ high competence perceptions; attributions of success and failure to internal, malleable, and controllable causes; self-setting of autonomous, challenging, specific, and realistic goals; and coping with difficulties and failure by focusing on analyzing the problem, regulating negative emotions, and applying context-specific strategies (Bandura, 2006). The dimensions we examined include four constructs based on the social-cognitive motivational perspective: self-concept (i.e., individuals’ self-perceptions of ability; Marsh & Martin, 2011); self-set goals (i.e., an individual’s personally determined goals for themselves and for others; Locke & Latham, 2002; Vansteenkiste & Ryan, 2013); causal attributions (i.e., cognitive-affective explanations of the causes of success and failure; Hong et al., 1999; Weiner, 2010); and coping strategies (i.e., purposeful behavioral, emotional, and cognitive actions for responding to situations perceived to challenge an individual’s resources; Compas et al., 2001). These social-cognitive motivational constructs are related; however, they constitute distinct attributes that combine to form an adaptive motivational mindset (Dweck & Leggett, 1988).

Developmental-Constructivist Perspective

The developmental-constructivist perspective reflects the role of cross-contextual capacities for intentional and purposeful self-reflection and self-regulation of knowledge, relationships, goals, and actions related to coping and growth (Kegan, 1994). The motivational-developmental dimensions we examined include two constructs based on the developmental-constructivist perspective: self-awareness (i.e., the ability to consider oneself as an object for reflection, monitoring, and learning; Silvia & Duval, 2001) and self-authorship (i.e., the agentic capacity for an individual to generate and regulate their beliefs, decisions, identity, and social relationships; Baxter-Magolda et al., 2010).

Methodology

Informed by the literature as well as theoretical, empirical, ethical, and logistical factors, Kaplan et al. (2017) advanced the operational definitions of the motivational-developmental dimensions presented in Table 1. These definitions guided the development of an essay-based measure of motivational-developmental dimensions implemented as part of a test-optional admissions policy at a large, public, urban research university located in the Mid-Atlantic region of the United States. We investigated the predictive validity of this measure using multiple regression to analyze data collected from 886 first-year undergraduate students who applied for test-optional admissions and subsequently enrolled at the participating institution. Table 2 presents descriptive statistics on the demographic characteristics of these students.

Table 1 Definitions of the motivational-developmental dimensions (Kaplan, 2015; Kaplan et al., 2017)
Table 2 Descriptive statistics on test-optional admissions applicants

Procedures

As part of the test-optional admissions process, students provided responses to four short-answer essay questions developed by Kaplan et al. (2017). Table 3 presents descriptions of the essay questions and the primary motivational-developmental dimensions measured within each question. These essays were presented to students as part of their initial admissions application. Students did not have access to the essay questions in advance and were expected to complete them without preparation. Following several rounds of training, norming, and rubric calibration, two readers used a rubric created by Kaplan (2015) and Kaplan et al. (2017; Table 4) to independently score the primary motivational-developmental dimensions within each essay question according to the following scale: 1 point (does not articulate the dimension), 4 points (narrowly articulates the dimension), 7 points (generally articulates the dimension), 10 points (explicitly articulates the dimension). Each essay question received a total score between 4 and 40 points per reader. A third reader scored an essay response if there was a difference of 5 points or more between the scores produced by the two initial readers on a given essay question. For 11.3% of the essays, a third reader’s score was accepted and the scores produced by both initial readers were rejected. Using this methodology, a total motivational-developmental dimension score (MDS) from 4 to 40 was produced by averaging the scores for the four essay questions. The MDS was included as part of an admissions index that the institution used to make undergraduate admission decisions. This index included a high school academic performance rating (HSGPA and course grades), the MDS or a standardized test score (depending on whether the applicant applied under the test-optional policy), and an admissions counselor rating. We obtained all student data from the Institutional Research department at the participating institution.
Recognizing that college participation and completion varies by family income (Ma et al., 2019), we collected estimated county-level median household income data from the U.S. Department of Commerce Bureau of the Census Small Area Income and Poverty Estimates Program (2022).
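The scoring and adjudication procedure described above can be sketched in a few lines. This is an illustrative reconstruction only: the function names, data shapes, and error handling are our own assumptions, not the institution's actual implementation.

```python
from typing import List, Optional

# Illustrative sketch of the essay-scoring procedure; names and data
# shapes are assumptions, not the institution's actual implementation.

def question_score(reader1: float, reader2: float,
                   reader3: Optional[float] = None) -> float:
    """Resolve one essay question's score (4-40) from independent readers.

    If the two initial readers differ by 5 or more points, the third
    reader's score replaces both; otherwise the two scores are averaged.
    """
    if abs(reader1 - reader2) >= 5:
        if reader3 is None:
            raise ValueError("third reader required when scores differ by >= 5")
        return reader3
    return (reader1 + reader2) / 2

def mds(question_scores: List[float]) -> float:
    """Total motivational-developmental score: mean across the four questions."""
    return sum(question_scores) / len(question_scores)
```

For example, initial reader scores of 10 and 12 on a question would be averaged to 11, whereas scores of 10 and 20 would trigger third-reader adjudication.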

Table 3 Test-optional essay questions and motivational-developmental dimensions
Table 4 Rubrics to assess motivational-developmental dimensions (Kaplan, 2015)

Variables

To reduce the possibility of omitted variable bias, the data we analyzed included a range of student demographic, financial, admissions, and academic information collected as part of the undergraduate admissions process at the participating institution. However, as is consistent with single institution studies, our data did not include all possible variables identified in the literature that may explain our outcomes of interest.

Our predictor variable MDS was a composite score of seven motivational-developmental dimensions (attributions of successes and failures, coping, relevant experiences, self-authorship, self-awareness, self-concept, and self-set goals). UGPA and 4- and 5-year graduation served as our outcome variables. The UGPA variable reflected the cumulative UGPA earned as of a student’s final semester enrolled. Four-year graduation was a dichotomous variable that represented bachelor’s degree completion within eight or fewer consecutive academic semesters. Five-year graduation was a dichotomous variable that represented bachelor’s degree completion in nine or ten consecutive academic semesters.

Based on previous studies that investigated the correlations between student characteristics (e.g., race, gender, and socioeconomic status), academic performance (e.g., HSGPA and UGPA), and baccalaureate degree completion (Mayhew et al., 2016; Pascarella & Terenzini, 2005), we included students’ race, gender, and socioeconomic status (approximated by Pell Grant receipt status and county-level median household income) as covariates in our regression analyses to account for the effects these variables may have on college outcomes. We dummy coded the categorical covariate for students’ race given the five racial groups included in our institutional data. Students with a self-reported race of American Indian, Multiple Ethnicities, Pacific Islander, or Unknown or a status of International were categorized as “Other Race” due to the limited racial representativeness of the sample (see Table 2). Additionally, we retained incongruent designations of race (e.g., African American and White) as these categories reflect those in the institutional dataset. We utilized Pell Grant receipt and a standard score of estimated median household income for the county in which students resided at the time of their application as an approximation of students’ socioeconomic status because Expected Family Contribution (EFC) data were missing for 75 students (8.5%) at the time of their matriculation. We also included HSGPA as a covariate in our regression analyses to account for students’ prior academic performance and an admissions counselor rating of students’ extracurricular activities, personal essay, and high school context. We included these variables in our analyses because of the associations between these commonly utilized admissions criteria and relevant postsecondary outcomes such as undergraduate GPA and graduation (Allensworth & Clark, 2020; Bastedo et al., 2018; Galla et al., 2019; Huang et al., 2017).

The admission counselor rating variable was recorded on a scale of 1 to 10, with 10 reflecting a counselor’s highest positive rating of the applicant. Additionally, we included a categorical variable for students’ academic program at matriculation to account for differences in the rigor and grading standards across academic disciplines and fields (Arcidiacono et al., 2012; Martin et al., 2017). The academic program categories in our dataset included Arts & Humanities, Business & Social Sciences, Health Sciences, and Sciences & Mathematics. Table 5 includes the means, standard deviations, and correlations for all variables.
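The covariate coding described above, dummy-coded race, Pell Grant receipt, and a standard score of county-level median household income, can be sketched as follows. The column names and values here are illustrative only, not the study's dataset.

```python
import pandas as pd

# Toy data; column names and values are illustrative, not the study's data.
df = pd.DataFrame({
    "race": ["White", "African American", "Asian", "Other Race", "White"],
    "pell": [1, 0, 1, 0, 0],
    "county_income": [52000, 61000, 47000, 58000, 70000],
})

# Dummy code race, dropping one category as the reference group.
dummies = pd.get_dummies(df["race"], prefix="race", drop_first=True)

# Standard score (z-score) of county-level median household income.
df["income_z"] = (df["county_income"] - df["county_income"].mean()) / df["county_income"].std()

# Combine the coded covariates into a single design matrix.
X = pd.concat([df[["pell", "income_z"]], dummies], axis=1)
```

With four racial categories and one dropped as the reference, the design matrix holds three race indicators alongside the Pell and income covariates.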

Table 5 Means, standard deviations, and correlations for study variables

Data Analysis

We used SPSS version 28 (IBM, 2020) to compute descriptive statistics and to conduct our correlation and regression analyses. We also used Lenhard and Lenhard’s (2014) calculator to compare correlations from independent samples. Prior to conducting our analyses, we examined our dataset to identify systematically missing cases and tested our data to ensure they met the assumptions associated with the analytical techniques we used (see “Appendix” for the results of our assumption tests). We removed five cases (0.6%) for students who matriculated at the participating institution but withdrew before earning course grades in their first semester. Additionally, we removed one case (0.1%) with a missing MDS, one case (0.1%) with a missing HSGPA, and 16 cases with missing admissions counselor ratings (1.8%). Accordingly, our analyses included all cases for which there were complete data. We indicate the analytical sample size for each analysis in table notes.

To test the reliability of the MDS measure, we first computed Light’s kappa (Light, 1971) to measure interrater reliability between the essay readers, as there was not a fixed number of readers for each essay question. We computed Light’s kappa values by calculating Cohen’s kappa for each pair of raters and averaging these values across all rater pairs. Second, we computed Pearson correlation coefficients to determine the strength and direction of the associations between the MDS and our outcome variables by student demographic characteristics. Third, we compared the resulting correlation coefficients to test for statistically significant differences across student subgroups (Diedenhofen & Musch, 2015; Lenhard & Lenhard, 2014). Specifically, we ran three separate tests to compare the nine correlation coefficients for each of our outcome variables across the student demographic characteristics in our dataset (race, gender, Pell Grant receipt). Fourth, we ran separate regression analyses using interaction terms to determine whether the effect of the MDS was moderated by student demographic characteristics including race, gender, and Pell Grant receipt status. To create our interaction terms, we centered the MDS to reduce the multicollinearity caused by higher-order terms. Lastly, to add nuance to our findings, we entered the individual motivational-developmental dimensions as separate variables in stepwise and combined regression models to examine which dimensions, if any, were statistically significant predictors of our outcome variables. We used multivariate linear and logistic regression to investigate the accuracy of the MDS in predicting UGPA and 4- and 5-year degree completion, respectively. We used the following regression equation to predict our outcome variables:

$$Y_{i} = \beta_{0} + \beta_{1} X_{i} + \beta_{2} MDS_{i} + \varepsilon_{i},$$

where Yi is our outcome variable of interest (UGPA, 4-year, and 5-year graduation); β0 is the intercept; β1 is the coefficient of Xi, a given covariate in the model (e.g., HSGPA); β2 is the coefficient of interest for the MDS; MDSi is the value of the MDS for student i; and εi is the residual error for student i.
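The interrater-reliability step described above, Light's kappa as the average of Cohen's kappa over all rater pairs, can be sketched as follows. This is a minimal illustration; the example ratings on the rubric's 1/4/7/10 scale are made up, not the study's data.

```python
from collections import Counter
from itertools import combinations
from typing import List

def cohen_kappa(a: List[int], b: List[int]) -> float:
    """Cohen's kappa for two raters scoring the same items.

    Assumes the raters are not in perfect chance agreement (p_e < 1).
    """
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # Chance agreement from each rater's marginal category proportions.
    p_e = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (p_o - p_e) / (1 - p_e)

def lights_kappa(ratings: List[List[int]]) -> float:
    """Light's kappa: Cohen's kappa averaged across all rater pairs."""
    pairs = list(combinations(ratings, 2))
    return sum(cohen_kappa(a, b) for a, b in pairs) / len(pairs)
```

Perfectly agreeing raters yield a kappa of 1.0, while raters who agree no more often than chance yield a kappa near 0.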

Results

Research Question 1

For Research Question 1, we asked whether the MDS is a reliable predictor of UGPA and 4- and 5-year bachelor’s degree completion. The results of our interrater reliability analysis indicated only slight agreement between readers across the individual motivational-developmental dimensions that comprise our measure. Light’s kappa values ranged from κ = .132 (Coping) to κ = .238 (Relevant Experiences). Table 6 presents these results.

Table 6 Interrater reliability of motivational-developmental score

Pearson correlational analyses indicated statistically significant associations between the MDS score and the outcome variables for several student subgroups. We identified statistically significant correlations at the p < .01 level between the MDS and UGPA for female students (r = .12) and Pell Grant recipients (r = .14); 4-year graduation for female students (r = .14) and Pell Grant recipients (r = .17); and 5-year graduation for Asian students (r = .28), male students (r = .16), and Pell Grant recipients (r = .15). Table 7 presents these results.

Table 7 Correlations between MDS and outcome variables by student demographic characteristics

Given these findings, we tested the equality of these correlation coefficients since they were obtained from independent samples (Cohen & Cohen, 1983; Preacher, 2002). The tests did not identify two-tailed p values less than .05 for any pair of correlation coefficients. By convention, this indicates that the differences between the correlation coefficients are not statistically significant (Cohen & Cohen, 1983).

Our moderator analyses using interaction terms between the MDS and student demographic characteristics (race, gender, Pell Grant recipient) yielded nonsignificant results except for the interaction between Asian students and the MDS for 5-year graduation, p = .009. Table 8 presents these results.

Table 8 Regression analyses with interaction terms

Research Question 2

For Research Question 2, we asked whether the MDS is a statistically significant predictor of UGPA. The MDS was statistically significant in the model (p = .013). However, the coefficient estimate was small (β = .022): a 1-point increase in the MDS was associated with only a .022-point increase in UGPA. The full linear regression model was statistically significant, R2 = .165, F(13, 861) = 12.876, p < .001, adjusted R2 = .152. Table 9 presents these results.

Table 9 Summary of regression analysis predicting UGPA

For robustness, we reran our analysis using stepwise regression to produce a covariate-only model and a model that includes the MDS. The addition of the MDS resulted in a statistically significant change in the F-statistic, F(1, 848) = 6.175, p = .013. This suggests that adding the MDS to the model marginally improved the prediction of UGPA compared to the covariate-only model, ΔR2 = .006.
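The hierarchical comparison above, a covariate-only model versus a model that adds the MDS, evaluated via the change in R² and the F-change statistic, can be illustrated with simulated data. Everything below is synthetic and only sketches the form of the test; it does not reproduce the study's results.

```python
import numpy as np

# Synthetic data; variable names and effect sizes are illustrative only.
rng = np.random.default_rng(0)
n = 200
hsgpa = rng.normal(3.2, 0.4, n)    # stand-in covariate
mds = rng.normal(25, 6, n)         # stand-in predictor of interest
ugpa = 0.5 * hsgpa + 0.01 * mds + rng.normal(0, 0.4, n)

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_reduced = r_squared(hsgpa[:, None], ugpa)              # covariates only
r2_full = r_squared(np.column_stack([hsgpa, mds]), ugpa)  # covariates + MDS

# F-change for adding one predictor: df1 = 1, df2 = n - k_full - 1.
k_full = 2
f_change = (r2_full - r2_reduced) / ((1 - r2_full) / (n - k_full - 1))
```

A large F-change (relative to the F distribution with df1 = 1 and df2 = n - k_full - 1) indicates that adding the predictor improves the model beyond the covariates alone.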

Research Question 3

For Research Question 3, we asked whether the MDS is a statistically significant predictor of 4- and 5-year bachelor’s degree completion. To add nuance to the results, we reran our analyses for graduation using stepwise regression to produce a covariate-only model and a model that includes the MDS.

Four-Year Graduation

The full logistic regression model to predict 4-year graduation was statistically significant, χ2(13) = 109.061, p < .001. The model explained 15.8% (Nagelkerke R2 = .158) of the variance in 4-year graduation and correctly classified 64.6% of cases. Sensitivity was 50.5%, specificity was 45.6%, positive predictive value was 62.9%, and negative predictive value was 33.6%. The MDS was statistically significant in the model (p = .013). An increase in the MDS score was associated with a small increase in the likelihood of 4-year graduation (Exp(β) = 1.076). The addition of the MDS to the regression model resulted in a statistically significant yet minimal contribution to the explanation of the variance in 4-year graduation, Nagelkerke ΔR2 = .009. Table 10 presents these results.

Table 10 Summary of logistic regression analysis predicting 4-year graduation

Five-Year Graduation

The full logistic regression model to predict 5-year graduation was statistically significant, χ2(13) = 75.464, p < .001. The model explained 11.4% (Nagelkerke R2 = .114) of the variance in 5-year graduation and correctly classified 69.2% of cases. Sensitivity was 89.8%, specificity was 32.3%, positive predictive value was 70.4%, and negative predictive value was 63.7%. The MDS was not statistically significant in the model (p = .114). An increase in the MDS was associated with a small increase in the likelihood of 5-year graduation (Exp(β) = 1.05). The addition of the MDS to the regression model made only a minimal contribution to the explanation of the variance in 5-year graduation, Nagelkerke ΔR2 = .003. Table 11 presents these results.

Table 11 Summary of logistic regression analysis predicting 5-year graduation

Research Question 4

For Research Question 4, we asked whether any specific motivational-developmental dimensions predicted UGPA and 4- and 5-year bachelor’s degree completion. Among the individual motivational-developmental dimensions we examined, only coping was a statistically significant predictor of UGPA (p = .009), 4-year graduation (p = .014), and 5-year graduation (p = .016). Table 12 presents these results.

Table 12 Summary of regression analyses with motivational-developmental dimensions as predictors of student outcomes

Discussion

Summary of the Findings

The results of our reliability tests indicated only slight agreement between raters on the scoring of the motivational-developmental dimensions that comprise our measure. This finding suggests that variation in students’ MDS may be a function of variation in raters’ scores rather than true differences across students in the motivational-developmental constructs our measure was designed to assess. For example, raters showed only slight agreement on the Coping dimension, yet Coping was the only dimension identified as a statistically significant predictor of our outcomes of interest. Therefore, readers should interpret our results with caution, as potential measurement error may lead to incorrect conclusions regarding the reliability and efficacy of our measure.

Our between-group and moderator analyses did not identify statistically significant subgroup differences in the relationship between the MDS and our outcomes of interest, and with the exception of Asian students, we did not identify statistically significant results when we entered interaction terms of student demographic variables and the MDS into our regression models. The relationships that do exist are small in magnitude, as demonstrated by the correlations between the MDS and 4-year graduation (r = .21) and 5-year graduation (r = .28) among Asian students. Thus, despite the absence of statistically significant differences in the correlation coefficients between student subgroups, the results of our moderation analysis suggest that the MDS carries varying levels of predictive validity across student racial groups.
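Testing whether a correlation between the MDS and an outcome differs between two independent student subgroups is commonly done with Fisher’s r-to-z transformation. The sketch below (plain Python; the correlations and sample sizes are illustrative placeholders, not our data) shows the computation:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def fisher_z_difference(r1, n1, r2, n2):
    """z statistic for the difference between two independent correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)  # Fisher r-to-z transform
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

# Illustrative values: r = .28 in one subgroup (n = 100) vs. r = .10 elsewhere (n = 300)
z = fisher_z_difference(0.28, 100, 0.10, 300)
print(round(z, 2))  # |z| < 1.96 implies no significant subgroup difference at alpha = .05
```

As the example suggests, moderately different correlations can fail to differ significantly when subgroup samples are small, which is one reason a moderation analysis can flag a subgroup even when pairwise correlation comparisons do not.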

Our findings are generally consistent with prior research that has explored the relationship between non-cognitive variables and student outcomes. For example, using the Student Readiness Inventory, Komarraju et al. (2013) found that the non-cognitive variable Academic Discipline incrementally predicted UGPA over HSGPA and standardized test scores. Additionally, a study of an admissions innovation employing essay-based non-cognitive assessments at DePaul University found that non-cognitive variables helped to predict first-year success and retention, particularly for students from lower income and minoritized backgrounds (Sedlacek, 2017). Consistent with our findings, the predictive power of non-cognitive variables in prior studies was small. Taken together, the similarities across these studies suggest that incorporating psychosocial-based assessments remains a promising direction. However, strengthening the reliability of our measure is a necessary first step before it can effectively promote more holistic, equitable admissions decisions.

Importance of the Findings

Holistic review is an intentional approach for expanding the predictive utility of traditional admissions criteria by considering the non-cognitive characteristics of applicants to make more accurate and equitable decisions about postsecondary educational opportunities (Bastedo et al., 2018; Hossler et al., 2019). However, the literature has demonstrated the need for a clear and consistent understanding of the validity of non-cognitive factors for predicting students’ success during and beyond college. An effective shift to more holistic admissions processes requires new, validated measures of postsecondary educational promise that meaningfully incorporate psychosocial attributes into admissions models. This is of particular importance as test-optional admissions policies proliferate. To this end, we examined the predictive validity of one such measure of students’ motivational-developmental dimensions.

Our measure did not meet the threshold of reliability that Cohen (1960) deemed acceptable for a measure to be considered valid. Although our results show that the MDS makes a small contribution to the explanation of the variance in UGPA and 4-year graduation rates, we do not recommend its use for high-stakes decision-making given the propensity for measurement error in the absence of additional steps to improve interrater reliability. Measures such as the one used in our study must demonstrate reliability and consistent predictive validity for all groups of students. Assessments used for high-stakes decision making should be designed and implemented with care to avoid perpetuating inequitable admissions outcomes and presenting barriers in the college admissions process.

Nevertheless, we remain encouraged that the assessment of applicants’ psychosocial attributes may be a worthy component of the admissions process. Assessing non-cognitive dimensions may encourage admissions offices, and by extension institutions, to think holistically about their philosophical bases for admission decision-making (Perfetto et al., 1999) and how these philosophies pertain to their institution’s mission and values. Furthermore, research has demonstrated that even in admissions offices committed to holistic review, officers tend to rely predominantly on traditional academic criteria to make admissions decisions (Bowman & Bastedo, 2018). Although administering and evaluating non-cognitive assessments may require more time and effort from both admissions officers and prospective students, this approach is likely worthwhile if it promotes a truly holistic, reliable, and accurate assessment of students’ potential.

Limitations

Despite the longitudinal nature of our study, we used degree completion outcome variables that are potentially influenced by a variety of factors not accounted for in our analyses. Consequently, we acknowledge that our study may be subject to omitted variable bias. For example, studies have suggested that variations in students’ tuition expenses net of financial aid affect retention and degree completion rates (Goldrick-Rab et al., 2016; Hossler et al., 2009; Nguyen et al., 2019; Welbeck et al., 2014; Xu & Webber, 2018). However, we endeavored to limit this bias by including variables that allowed us to comprehensively consider factors related to students’ admissions and outcomes, including background characteristics, academic variables, and admissions counselor ratings.

Student motivation and developmental characteristics are not fixed attributes; they are the product of self-reflection and accumulated life experiences (Bandura, 1994) and evolve over time (Mayhew et al., 2016; Pascarella & Terenzini, 2005). Scholars have argued that a holistic evaluation of students' previous academic performance and psychosocial attributes, developed over time in different contexts, cannot be captured by a single assessment (Sedlacek, 2004, 2017). However, our study measures students’ motivation and development at a specific time in their educational careers (i.e., during the college admissions process) and not as longitudinal constructs that may be continuously predictive of behaviors positively associated with educational outcomes.

The relative homogeneity of the students in our study, compared with the gender and racial demographics of college students nationally, should frame any interpretation of the findings. For example, to feasibly conduct quantitative analyses using all participants’ data, several racial groups had to be combined into a single category (“Other”). This grouping may obscure important measurement and educational differences between student subgroups. Additionally, our analytic sample consists only of students who (a) applied for test-optional admission and (b) subsequently enrolled at the participating institution. This narrow sample further limits the generalizability of our findings.

Transfer students, graduate students, and denied admission applicants were excluded from the sample. Therefore, our findings may not generalize to these student populations, despite the need to validate predictors of success for students of all types (e.g., transfer students, international students, returning students) and at all levels (e.g., graduate, professional). Additionally, we conducted our study at a single institution located in a specific geographic region and of a particular institutional classification (i.e., public, urban, comprehensive research university). Therefore, our findings may not generalize to other institutional types.

Research has identified associations between essay content, SAT scores, and household income (Alvero et al., 2021). The essay readers in our study were trained to specifically score articulations of the motivational-developmental constructs as opposed to other aspects of analytical writing such as grammar, syntax, and mechanics. We believe this approach allowed us to more accurately capture the dimensions of interest rather than students’ background characteristics such as their socioeconomic status or writing ability. However, because our study used human readers with inherent subjectivity, the reliability of the MDS is subject to their level of agreement on the articulations of each dimension measured within the essay questions.

Implications for Practice and Future Research

Enrollment management leaders and other higher education professionals must weigh the practical significance of accounting for a nominal percentage of the variance in educational outcomes (e.g., UGPA and degree completion) against the introduction of additional requirements in the undergraduate admissions process. Because our results demonstrated only slight agreement between readers on the MDS, and because the relationship between the MDS and our outcomes of interest was moderated for Asian students, the MDS should not be used for high-stakes decision making in the absence of other variables with empirically demonstrated reliability and validity. However, we remain encouraged that scores derived from more reliable measures may enrich applicant portfolios undergoing holistic review by providing admissions officers with more comprehensive information about how students may approach and adapt to challenges, insights that are of particular importance at a time when many admissions policies have been disrupted (Bastedo et al., 2022). Still, institutions should be mindful that the use of an essay-based measure in the college admissions process may limit application completion and present workload constraints for admissions officers.

Successfully transitioning to college, especially in times of social and economic uncertainty, requires coping skills and psychosocial resources that exist apart from a student’s cognitive ability. Therefore, researchers should continue to develop and validate measures of non-cognitive psychosocial factors relevant to higher education and related contexts. For example, we believe in the promise of alternative measures of psychosocial factors, such as an empirically validated integrated inventory or scale.

While our study found that psychosocial factors made a small contribution to the explanation of the variance in UGPA and degree completion, greater explanatory power might be obtained from more reliable measures of applicants’ coping skills and related variables associated with educational success, such as perseverance and resiliency. Therefore, future research should examine the predictive value of psychosocial factors for additional outcomes that may be both constitutive of and indirectly related to educational success (e.g., mental wellness). Future research should also discern the predictive validity of psychosocial factors among diverse student populations in various higher education contexts, including different institutional types and degree levels.

Throughout the development and implementation of our measure, steps were taken to mitigate the effect of bias. For instance, raters responsible for scoring the essays were required to attend several training sessions, throughout which the rubric was calibrated and normed. Furthermore, multiple raters read and scored each applicant essay. While these measures were employed to reduce bias and ensure the reliability of our instrument, continued research on the efficacy of these steps—and on the utility of the personal essay format in general—is warranted. With this in mind, we encourage future research that employs alternative methods of data collection such as situational judgement tests that present prospective students with a realistic scenario they may encounter in college and ask them to indicate how they would respond. Such efforts should be scaled to include multiple study sites to maximize external validity. Finally, future research should examine student outcomes at multiple institutions that have integrated proprietary measures of psychosocial variables into their admissions processes. Such studies could examine whether these measures effectively address the limitations of traditional admissions criteria, reduce predictive bias, and expand postsecondary educational opportunities while allowing the institution to enroll a highly qualified and diverse student body.

Conclusion

Many colleges and universities have adapted their admissions criteria and shifted away from the traditional reliance on standardized test scores as a key predictor of student outcomes. Institutions must also carefully consider which criteria most reliably, accurately, and equitably predict students’ college performance and persistence. Equitable access to postsecondary education and the benefits it confers may be advanced using novel measures that represent not only applicants’ prior academic achievement but also their personalities, backgrounds, and the challenges they have overcome. Understanding applicants’ non-cognitive psychosocial attributes, such as those assessed using our instrument, may contribute to a more holistic understanding of how students can succeed in postsecondary education.