Doris Zahner
Council for Aid to Education, United States
Olivia Cortellini
Council for Aid to Education, United States
Kelly Rotholz
Council for Aid to Education, United States
Tess Dawber
Council for Aid to Education, United States
Part III of this report discusses the assessment in each of the six participating countries. Each chapter reviews the policy context, test administration, mastery levels, score distributions, and data on student effort and engagement. This chapter discusses the CLA+ assessment in the United States, where it has a long history and is well established and widely recognised.
The original higher education institutions in the United States were modelled after the Oxford-Cambridge system in the United Kingdom (Miller and Rudolph, 1962[1]; Thelin, 2012[2]) to educate and train ministers. Over time, they have evolved into a complex, if not globally the most complex, higher education system. Currently, there are 4 360 degree-granting institutions (Snyder, de Brey and Dillow, 2019[3]), of which 2 832 are four-year and 1 528 are two-year. Bachelor’s degrees are typically awarded by public or private institutions as part of a four-year study programme. Associate’s degrees usually take two years to complete and are awarded by community colleges, technical colleges and vocational schools.
The terms “college” and “university”, although often used interchangeably within the United States, do carry some distinctions. While there are no national standards (a recurring theme in higher education in the United States), universities are typically institutions that offer both undergraduate and graduate degrees and have larger student enrolments. Colleges, on the other hand, tend to offer only undergraduate (associate’s and bachelor’s) degrees and often enrol fewer students than universities. Universities may designate certain programmes as colleges within the university. For example, the Colleges of Agriculture and Life Sciences; Architecture, Art and Planning; Arts and Sciences; Engineering; and Human Ecology are all part of Cornell University. Both colleges and universities can be public or private institutions, and private institutions can be further categorised as non-profit or for-profit entities. A third, and much smaller, category of higher education institutions uses titles such as “institute”, “academy”, “union”, “conservatory” and “school”.
As in many other countries, there are publicly and privately funded institutions. Within the private sector, there is a further division into non-profit and for-profit institutions. Public institutions are mainly funded by state and federal governments (Ginder, Kelly-Reid and Mann, 2017[4]) and are non-profit organisations. Private institutions rely more heavily on endowments (Kaplan, 2020[5]) and, particularly in the case of for-profit institutions, on student tuition, which can account for up to 90% of funding (Ginder, Kelly-Reid and Mann, 2017[4]). Public institutions comprise 37% (1 623 out of 4 360) of the higher education institutions in the United States (Snyder, de Brey and Dillow, 2019[3]). Within the private sector, which accounts for approximately 63% of all higher education institutions, 61.5% (1 682 of 2 727) are non-profit (Snyder, de Brey and Dillow, 2019[3]).
As of autumn 2020, 16.7 million undergraduate students were projected to attend colleges and universities in the United States (De Brey et al., 2021[6]). Of these students, the majority (74%) were attending public institutions, 83.4% were attending four-year institutions, 57.1% were women and 38.3% were persons of colour (De Brey et al., 2021[6]). During the 2020/21 academic year, almost two million bachelor’s degrees were awarded (De Brey et al., 2021[6]).
However, despite the upward trend in college enrolment over the last two decades, college graduation rates remain relatively low in the United States. According to the National Center for Education Statistics (Hussar et al., 2020[7]), as of spring 2020, nearly 40% of students who began seeking a bachelor’s degree at a four-year institution in 2012 had yet to receive their degree. Furthermore, year-to-year retention rates vary considerably across institutions. Between 2017 and 2018, while highly selective institutions retained 97% of their students, less selective and open-admissions schools retained a substantially smaller percentage (62%) over the same period (Hussar et al., 2020[7]). Contrary to the oft-perpetuated notion that student retention is a “first-year” problem, attrition remains a risk for students at all class levels, with approximately one-third of college dropouts having obtained at least three-quarters of the credits required for graduation (Mabel and Britton, 2018[8]). Although many students cite non-academic reasons such as financial difficulties, health or family obligations as the primary causes for dropping out or deferring their college education (Astin and Oseguera, 2012[9]), academic failure is also a significant factor contributing to lack of persistence and decreased retention in higher education.
Student attrition from higher education institutions can lead to a number of financial consequences. Students who do not obtain a bachelor’s degree tend to have poorer career outcomes, as measured by salary and employment, than their more educated peers. According to the US Bureau of Labor Statistics (2020[10]), median annual salary increases with each successive level of education completed. Similarly, unemployment varies inversely with degree attainment, meaning those who are less educated are more likely to be unemployed. Importantly, these career setbacks are not limited to those who have never enrolled in college: those who enrolled in higher education but did not complete their undergraduate education also experience them.
Prior findings are mixed as to whether non-degree holders who have completed some college fare better financially than those who did not continue their education after receiving a high-school diploma (e.g. Baum (2014[11]), Giani, Attewell and Walling (2019[12]), Shapiro et al. (2014[13])). However, there is little dispute that students who complete their bachelor’s degree fare better than those who enrol in college but never graduate (Giani, Attewell and Walling, 2019[12]). Like college graduates, college students who never complete their degree face substantial financial costs, accumulating debt while forgoing earnings. However, unlike their degree-holding peers, college dropouts bear these costs without eventually gaining the social capital (i.e. networks of relationships) that comes with a higher education degree. In fact, this problem even extends to students who take more than the standard four years to graduate, as they accrue additional costs over the years with diminishing returns (Sullivan, 2010[14]).
Higher education institutions have employed numerous strategies to increase student retention and graduation rates, with mixed results. Some programmes that have shown success at increasing retention and graduation rates include “methods of inquiry” critical thinking courses (Ahuna, Tinnesz and VanZile-Tamsen, 2010[15]) and targeted study skills courses for returning students who are on academic probation due to their low grade point averages (GPAs) (Engle, Reilly and Levine, 2004[16]). Conversely, Johnson and Stage (2018[17]) reviewed universities’ use of the 10 high-impact practices for student success as identified by the Association of American Colleges and Universities: freshman seminars, core curricula, learning communities, writing courses, collaborative assignments, undergraduate research, study abroad, service learning, internships and capstone or senior projects (Kuh, 2008[18]). The quantity of practices offered on campus showed no relation to either four- or six-year graduation rates. Of the 10 practices, only internships and freshman seminars showed predictive relationships with graduation rate, both of which were weak and negative. Internships had a negative relationship with four-year graduation rate but not six-year graduation rate, indicating that internships may prolong the amount of time needed to complete all required credits while not discouraging graduation itself.
Johnson and Stage (2018[17]) suggest that the negative relationship they found between inclusion of freshman seminars and graduation rate may stem from a lack of targeted instruction. That is, schools that require freshman seminars for all students may be investing too heavily in the seminars rather than allocating their resources to students with greater need. Indeed, Potts and Schultz (2008[19]) found no significant effect of freshman seminars on student retention when considering an entire student body, but they did find a significant positive effect on retention for students who lived off campus and for students whose high-school profiles were below the typical standard of their school. Similarly, Engle, Reilly and Levine (2004[16]) found a positive effect of targeted retention programmes for students who had performed poorly in their first or second year of college.
Interestingly, despite the noted importance of academic experiences to student integration and thus retention, little research to date has examined the role of critical thinking skills in student retention. Critical thinking skills have, however, been linked with other college and post-college outcomes, including career outcomes such as employment status and salary (Arum and Roksa, 2014[20], Chapter 7) and educational outcomes such as graduate school enrolment (Arum and Roksa, 2014[20]; Mullen, Goyette and Soares, 2003[21]).
Given the positive findings regarding programmes and courses that target and cater to students’ specific needs, it is important to better understand methods that can be used to target students effectively. Although there is evidence that courses designed to enhance critical thinking skills can positively impact student retention (e.g. Ahuna, Tinnesz and VanZile-Tamsen, 2010[15]), there is relatively little literature investigating critical thinking skills as predictors of student retention compared to more traditional predictors like high-school grade point average (HSGPA). If universities wish to introduce programming that targets critical thinking skills, it follows that students should be identified and selected based on their critical thinking proficiency.
From a theoretical perspective, Mah (2016[22]) touts the benefits of critical thinking skills, learning analytics and digital badges to student retention, suggesting that these constructs and practices not only benefit student retention individually but also interact with one another. An assessment of critical thinking skills, then, has the potential to serve different functions in promoting an increase in student retention. First, at the institution level, it has the potential to help identify students who would benefit most from targeted remediation. Second, at the student level, it can help students understand their own strengths and weaknesses and thereby seek the appropriate guidance and resources to meet their needs. This is not only for students who are not proficient in these skills. Those who are proficient can also benefit from further improvement of their skills. Third, providing feedback on critical thinking performance may increase students’ motivation to further their own critical thinking skills and thus enhance their academic engagement.
In 2006, under the Commission on the Future of Higher Education, the Voluntary System of Accountability (VSA) was established to provide a way to compare and report evidence of student learning at higher education institutions via the College Portrait (Jankowski et al., 2012[23]; McPherson and Shulenburger, 2006[24]; Miller, 2007[25]). The Spellings Commission, named after then-Secretary of Education Margaret Spellings, convened a panel of experts to develop a strategy to ensure that higher education was accessible and affordable, and that it adequately prepared students for the global economy. The VSA was developed by the Association of Public and Land-grant Universities (APLU) and the American Association of State Colleges and Universities (AASCU) as a way for higher education institutions to address the requirements outlined by the Spellings Commission and to respond to the growing concern that the federal government would mandate a single metric for institutional accessibility, affordability and accountability. The VSA provides a framework to meet these requirements (Keller and Hammang, 2008[26]). As part of the VSA initiative, institutions must assess student outcomes and evaluate institutional effectiveness (Liu, 2017[27]; 2011[28]).
In 2008, a study funded by the Fund for the Improvement of Postsecondary Education (FIPSE) examined whether assessments of higher education students’ general education learning outcomes provided comparable information (Klein et al., 2009[29]; Steedle, Kugelmass and Nemeth, 2010[30]). Findings from this study led to the use of three assessments of generic skills – ACT’s Collegiate Assessment of Academic Proficiency (CAAP), Council for Aid to Education’s (CAE) Collegiate Learning Assessment (CLA) and Educational Testing Service’s (ETS) Measure of Academic Proficiency and Progress (MAPP) – as part of the VSA initiative for institutional accountability.
The CLA became one of the assessments that institutions could use to report student learning outcomes on the VSA’s College Portrait. Institutions administered the CLA for VSA reporting either for benchmarking or for value-added measurement of institutional growth. Although this was a partial solution to the quality assurance and accountability recommendations of the Spellings Commission, it was insufficient for measuring individual students’ learning gains in essential college and career skills such as critical thinking, problem solving, and written communication.
Since 2002, CAE has pioneered the use of performance-based assessments of students’ essential college and career readiness skills. To date, over 700 institutions, both in the United States and internationally, and over 650 000 students have participated in the CLA. The CLA was designed as an institutional measure of students’ critical thinking skills, providing cross-sectional growth estimates and norm-referenced data. In 2013, CAE transitioned to the next iteration of the CLA, CLA+. CLA+ includes a selected-response section, which provides additional subscores and allows CLA+ to report reliable student-level results.
The assessment is designed to be completed in approximately 90 minutes and includes an optional tutorial, a Performance Task (PT), Selected-Response Questions (SRQs), and a demographic survey. CLA+ is administered through a secure browser that delivers the PT and 25 SRQs to each student. The assessment must be administered under standardised, controlled testing conditions, with all students monitored by a proctor. In spring 2020, CAE introduced remote proctoring in response to the COVID-19 pandemic. Remote proctoring allowed higher education institutions to administer CLA+ to students online, with proctoring via web-conferencing software. A study comparing students who took CLA+ under remote proctoring with those proctored in person found no significant differences in PT scores and only marginally significant differences in SRQ scores (Zahner and Cortellini, 2021[31]). Students in the remote proctoring condition performed slightly better on the SRQs than those in the in-person proctoring condition.
The standard cross-sectional model for assessing institutional growth involves testing a sample of 100 or more entering students during the fall testing window (typically mid-August through early November), and then testing a sample of 100 or more exiting students during the spring testing window (typically early February through mid-May). All testing sessions require a proctor to approve students into the interface and manage the testing environment.
Test administration steps:
1. Receive welcome email from CLA+ team with instructional materials.
2. Verify testing plans.
3. Review instructional materials and complete technology testing.
4. Administer CLA+ exam to students.
5. Confirm with CAE that testing is complete.
6. Submit registrar data to confirm students’ class levels.
7. Receive reports through a secure file-sharing service.
Cross-sectional results include growth estimates (in the form of effect sizes and value-added scores) and normed data such as percentile rankings. Cross-sectional reports also include information such as summary scores, subscores, and mastery levels.
If an institution only wishes to assess a single cohort or does not want institutional growth metrics, it can opt to receive mastery level results. Mastery level results include statistics only for the students tested within a specific administration; they do not include growth estimates or normed data and have less stringent sampling requirements. These results include summary scores, subscores, and mastery levels. Students do not need to test within a specific testing window in order to be included in the institutional sample for this type of reporting.
Table 10.1 presents the average score and standard deviation (in parentheses) for each CLA+ score by cohort. Entering students in the United States received an average total CLA+ score of 1 060 (SD = 149), which corresponds with the Developing mastery level. Exiting students on average scored 43 points higher, with an average score of 1 103 (SD = 148), which corresponds to the Proficient mastery level. Independent samples t-tests found a small, significant difference between entering and exiting students on total CLA+ score (Table 10.2). As seen by the score differences summarised in Table 10.1 as well as the t-test results shown in Table 10.2, the increase in total CLA+ score in between class levels seems to be driven in part by the difference in the respective classes’ average performance on the PT. There was a 47-point average score increase between entering and exiting students on the PT (Cohen’s d = .28) whereas there was a 38-point average score increase between classes on the SRQ section (Cohen’s d = .21).
Table 10.1. Average CLA+ scores and standard deviations by cohort

| | Total CLA+ score | Performance Task score | Selected-Response score |
|---|---|---|---|
| Entering (n = 50 809) | 1 060 (149) | 1 043 (168) | 1 078 (186) |
| Exiting (n = 47 431) | 1 103 (148) | 1 090 (170) | 1 116 (182) |
| Score difference (exiting - entering) | +43 | +47 | +38 |
Table 10.2. Independent-samples t-tests comparing entering and exiting students

| | t | df | p | Cohen's d |
|---|---|---|---|---|
| Total CLA+ score | -44.76 | 97894 | <.001 | .29 |
| Performance Task score | -43.63 | 97627 | <.001 | .28 |
| Selected-Response score | -32.27 | 97972 | <.001 | .21 |
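The comparison in Tables 10.1 and 10.2 can be reproduced approximately from the published summary statistics alone. The following Python sketch (not CAE's own analysis code) computes a pooled-variance independent-samples t-test and Cohen's d from the means, standard deviations and sample sizes reported in Table 10.1; small discrepancies from Table 10.2 reflect rounding of the published values.

```python
from math import sqrt

from scipy import stats


def summary_t_test(m1, s1, n1, m2, s2, n2):
    """Pooled-variance independent-samples t-test and Cohen's d from summary stats."""
    df = n1 + n2 - 2
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    t = (m1 - m2) / (pooled_sd * sqrt(1 / n1 + 1 / n2))
    p = 2 * stats.t.sf(abs(t), df)      # two-tailed p-value
    d = (m2 - m1) / pooled_sd           # Cohen's d (exiting minus entering)
    return t, df, p, d


# Total CLA+ score (Table 10.1): entering M = 1 060, SD = 149, n = 50 809;
# exiting M = 1 103, SD = 148, n = 47 431.
t, df, p, d = summary_t_test(1060, 149, 50809, 1103, 148, 47431)
print(f"t({df}) = {t:.2f}, p = {p:.3g}, Cohen's d = {d:.2f}")
# Output is close to the reported values in Table 10.2: t ≈ -45, p < .001, d ≈ .29.
```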
The overall distribution of CLA+ mastery levels is summarised in Table 10.3. As shown in Figure 10.1, the distributions varied between entering and exiting students. Chi-square analysis revealed these differences to be statistically significant (df = 4, χ2 = 1835.19, p < .001, Cramér's V = .137). Overall, higher percentages of entering than exiting students fell into the non-proficient mastery levels (i.e. “Emerging” and “Developing”), and higher percentages of exiting than entering students fell into the mastery levels that meet or exceed the “Proficient” threshold (i.e. “Proficient”, “Accomplished” and “Advanced”). The most notable difference between entering and exiting students was the 9-percentage-point gap at the Emerging level of mastery: whereas 27% of entering students performed at this level (meaning that they lacked even basic critical thinking skills), only 18% of exiting students did so. The trend between entering and exiting students in mastery level distribution was further reflected in the distribution of total CLA+ scores (Figure 10.2-Figure 10.3). Although scores were approximately normally distributed at both class levels, the distribution of exiting student scores was shifted slightly higher than that of entering student scores.
Table 10.3. Overall distribution of CLA+ mastery levels

| Level | Percentage |
|---|---|
| Emerging | 22.7% |
| Developing | 30.4% |
| Proficient | 28.5% |
| Accomplished | 16.2% |
| Advanced | 2.2% |
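For readers who want to replicate the mastery-level comparison, the sketch below shows how a chi-square test of independence and Cramér's V could be computed. The cohort-by-level counts are hypothetical placeholders (the report gives only the pooled percentages in Table 10.3 and the 27% vs 18% Emerging contrast), so the output will not match the reported χ2 = 1835.19 exactly.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts (rows: entering, exiting; columns: Emerging ... Advanced).
observed = np.array([
    [13700, 16100, 13300, 6900, 800],   # entering, n ≈ 50 809
    [ 8500, 13800, 14700, 9100, 1300],  # exiting,  n ≈ 47 431
])

chi2, p, dof, expected = chi2_contingency(observed)
n = observed.sum()
cramers_v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))  # min(rows-1, cols-1) = 1
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3g}, Cramér's V = {cramers_v:.3f}")
```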
In addition to the comparison of total CLA+ scores and score distributions, PT and SRQ subscores were also analysed. On all three PT subscores – Analysis and Problem Solving (APS), Writing Effectiveness (WE) and Writing Mechanics (WM) – exiting students on average outperformed entering students. The differences between entering and exiting student scores were significant but small (Figure 10.4; Table 10.4). The same pattern held for the three SRQ subscores: Scientific and Quantitative Reasoning (SQR), Critical Reading and Evaluation (CRE) and Critique an Argument (CA). However, the difference in SRQ subscores, though significant, was negligibly small (Figure 10.5; Table 10.4).
Table 10.4. Independent-samples t-tests comparing entering and exiting students on each subscore

| | t | df | p | Cohen's d |
|---|---|---|---|---|
| APS | -40.63 | 97024 | <.001 | 0.25 |
| WE | -44.66 | 96903 | <.001 | 0.28 |
| WM | -44.27 | 97401 | <.001 | 0.27 |
| SQR | -25.31 | 97858 | <.001 | 0.16 |
| CRE | -27.93 | 98042 | <.001 | 0.18 |
| CA | -20.24 | 97257 | <.001 | 0.13 |

Note: APS = Analysis and Problem Solving; WE = Writing Effectiveness; WM = Writing Mechanics; SQR = Scientific and Quantitative Reasoning; CRE = Critical Reading and Evaluation; CA = Critique an Argument.
Upon completion of CLA+, all US domestic students reported their perceived effort and engagement for each section of the assessment on 5-point Likert scales (Table 10.5). On the PT, both entering and exiting students gave an average effort rating of 3.7 (SD = 0.9). For the SRQ section, entering students reported an average of 3.2 points on the effort scale (SD = 1.0) and exiting students an average of 3.3 (SD = 1.0). Table 10.5 summarises the distribution of self-reported effort ratings by class level and section. Paired-samples t-tests showed that entering students reported expending more effort on the PT section than on the SRQ section (t(50776) = 126.61, p < .001, Cohen's d = .54). The same was true for exiting students (t(47401) = 99.19, p < .001, Cohen's d = .41). For both class levels, the effect size was moderate.
Table 10.5. Distribution of self-reported effort by section and class level

| Section | Cohort | No effort at all | A little effort | A moderate amount of effort | A lot of effort | My best effort |
|---|---|---|---|---|---|---|
| PT | Entering | 0.5% | 5.5% | 38.5% | 34.9% | 20.5% |
| PT | Exiting | 0.7% | 6.3% | 36.1% | 32.4% | 24.5% |
| SRQ | Entering | 3.3% | 17.2% | 45.8% | 23.7% | 10.0% |
| SRQ | Exiting | 2.8% | 14.2% | 43.0% | 25.3% | 14.7% |

Note: PT = Performance Task; SRQ = Selected-Response Questions.
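The paired-samples comparison of effort ratings described above can be illustrated with a short sketch. The ratings below are simulated, not the CLA+ data, and the within-person effect size is computed as the mean difference divided by the standard deviation of the differences (one common convention for paired designs; the report does not state which formula was used).

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 1000                                     # hypothetical sample size
pt_effort = rng.integers(1, 6, size=n)       # 5-point Likert ratings for the PT
# Simulate slightly lower effort on the SRQ section, clipped to the 1-5 scale.
srq_effort = np.clip(pt_effort - rng.integers(0, 2, size=n), 1, 5)

t, p = ttest_rel(pt_effort, srq_effort)
diff = pt_effort - srq_effort
d = diff.mean() / diff.std(ddof=1)           # paired Cohen's d (d_z)
print(f"t({n - 1}) = {t:.2f}, p = {p:.3g}, d = {d:.2f}")
```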
Compared to self-reported effort, students’ ratings of their engagement with the assessment tended to fall lower on the scale. However, the differences between students’ reported engagement with the PT and the SRQ section mirrored those previously seen with self-reported effort. The average PT engagement rating among entering students was 3.0 (SD = 1.0), and that among exiting students was 3.1 (SD = 1.0). In contrast, entering students reported an average engagement level of 2.4 (SD = 1.0) for the SRQ section, and exiting students reported an average of 2.6 (SD = 1.1). At both class levels, there was a significant difference between the two sections regarding student engagement. The effect size was moderate for both entering students (t(50776) = 117.27, p < .001, Cohen’s d = .41) and exiting students (t(47401) = 99.67, p < .001, Cohen’s d = .49). Distributions are summarised in Table 10.6.
Table 10.6. Distribution of self-reported engagement by section and class level

| Section | Cohort | Not at all engaging | A little engaging | Moderately engaging | Very engaging |
|---|---|---|---|---|---|
| PT | Entering | 8.2% | 19.0% | 40.1% | 26.9% |
| PT | Exiting | 8.3% | 17.5% | 37.9% | 29.2% |
| SRQ | Entering | 21.5% | 30.5% | 33.2% | 12.3% |
| SRQ | Exiting | 18.1% | 27.3% | 36.0% | 15.3% |

Note: PT = Performance Task; SRQ = Selected-Response Questions.
At both class levels, there was an association between self-reported effort/engagement and CLA+ performance. Broadly speaking, average scores tended to increase with each successive level of effort and engagement (Table 10.7-Table 10.8). Indeed, multiple regression analysis shows that both effort and engagement were significant predictors of PT score at both class levels (Table 10.9). Effort and engagement together explained 12% of the variation in PT scores for entering students and 14% for exiting students. The exception was self-reported engagement on the SRQ section: although both effort and engagement emerged as significant predictors of SRQ score among exiting students, only effort was a significant predictor among entering students (Table 10.10). The variation in SRQ scores explained by effort and engagement was lower than in the PT regressions, at only 6% for both entering and exiting students.
Table 10.7. Average section scores (standard deviations in parentheses) by self-reported effort

| Section | Cohort | No effort at all | A little effort | A moderate amount of effort | A lot of effort | My best effort |
|---|---|---|---|---|---|---|
| PT | Entering | 840 (180) | 896 (163) | 1 001 (159) | 1 080 (153) | 1 104 (159) |
| PT | Exiting | 819 (165) | 932 (161) | 1 046 (158) | 1 130 (152) | 1 150 (160) |
| SRQ | Entering | 904 (147) | 1 010 (179) | 1 084 (181) | 1 123 (177) | 1 122 (179) |
| SRQ | Exiting | 902 (148) | 1 039 (179) | 1 119 (175) | 1 157 (172) | 1 154 (179) |

Note: PT = Performance Task; SRQ = Selected-Response Questions.
Table 10.8. Average section scores (standard deviations in parentheses) by self-reported engagement

| Section | Cohort | Not at all engaging | A little engaging | Moderately engaging | Very engaging | Extremely engaging |
|---|---|---|---|---|---|---|
| PT | Entering | 945 (168) | 996 (167) | 1 051 (163) | 1 082 (157) | 1 096 (160) |
| PT | Exiting | 982 (177) | 1 039 (171) | 1 096 (162) | 1 131 (158) | 1 135 (160) |
| SRQ | Entering | 1 024 (180) | 1 086 (184) | 1 096 (186) | 1 103 (179) | 1 082 (184) |
| SRQ | Exiting | 1 055 (183) | 1 120 (180) | 1 136 (179) | 1 135 (179) | 1 113 (181) |

Note: PT = Performance Task; SRQ = Selected-Response Questions.
Table 10.9. Regression of Performance Task scores on self-reported effort and engagement

| Variable | Entering: B | SE (B) | β | p | Exiting: B | SE (B) | β | p |
|---|---|---|---|---|---|---|---|---|
| Effort | 52.53 | 0.90 | 0.27 | <.001 | 56.22 | 0.92 | 0.31 | <.001 |
| Engagement | 19.90 | 0.78 | 0.12 | <.001 | 16.44 | 0.82 | 0.10 | <.001 |
Table 10.10. Regression of Selected-Response scores on self-reported effort and engagement

| Variable | Entering: B | SE (B) | β | p | Exiting: B | SE (B) | β | p |
|---|---|---|---|---|---|---|---|---|
| Effort | 47.49 | 0.99 | 0.24 | <.001 | 46.98 | 0.97 | 0.25 | <.001 |
| Engagement | -0.24 | 0.90 | 0.00 | 0.792 | -2.28 | 0.91 | -0.01 | 0.012 |
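The regressions summarised in Tables 10.9 and 10.10 can be sketched as ordinary least squares models with section score regressed on self-reported effort and engagement. The data below are simulated with assumed coefficients roughly in the range of Table 10.9, so this illustrates the modelling approach rather than reproducing CAE's results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000                                       # hypothetical sample size
effort = rng.integers(1, 6, size=n)            # 1-5 Likert ratings
engagement = rng.integers(1, 6, size=n)
# Assumed generating model: B_effort ≈ 52, B_engagement ≈ 20, noisy scores.
score = 850 + 52 * effort + 20 * engagement + rng.normal(0, 150, size=n)

X = sm.add_constant(np.column_stack([effort, engagement]))
model = sm.OLS(score, X).fit()
print(model.summary(xname=["const", "effort", "engagement"]))
# model.params gives the unstandardised B coefficients; standardised betas can be
# obtained by z-scoring score, effort and engagement before fitting.
```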
Higher education institutions in America have a long track record of resilience and innovation: perhaps the most familiar example is how colleges and universities embraced the GI Bill¹ and implemented the largest expansion of access to post-secondary education in the world (Olson, 1973[32]). The COVID-19 pandemic poses a new and daunting challenge, leading many educators and analysts to wonder whether higher education will ever return to what it once was: primarily in-person, on-campus teaching and learning. Indeed, some critics of the American system (at all levels), borrowing the perhaps tired cliché about not letting a good crisis go to waste, hope the current challenge will lead to fundamental reforms. One education leader sees it as a “Sputnik-like opportunity” (Reville, 2020[33]).
Whether and how the system continues to adapt will depend on a combination of political will, economic constraints, technological possibilities and commitment to core values of teaching and learning. Meanwhile, changes are already apparent: classrooms are moving to remote or hybrid formats requiring adaptations by faculty and staff; administrators are considering alternative semester schedules to ease congestion and enable “social distancing”; and admissions offices are modifying requirements to make standardised tests optional in an effort to ease burdens on students already struggling to complete high school (or college) successfully.
If the national goal for higher education institutions is to achieve higher levels of educational attainment (Bowen, Mcpherson and Ream, 2018[34]), then the role of assessment within this context is essential. At the centre of educational attainment are retention, persistence and graduation. As discussed above, programmes and courses that target and cater to students’ specific needs have shown positive results, and there is evidence that courses designed to enhance critical thinking skills can improve student retention (Ahuna, Tinnesz and VanZile-Tamsen, 2010[15]). Yet critical thinking skills remain comparatively under-studied as predictors of retention relative to more traditional predictors such as high-school grade point average. If universities wish to introduce programming that targets critical thinking skills, it follows that students should be identified and selected based on their critical thinking proficiency.
Upon attaining a higher education degree, if graduates are unable to find appropriate employment, the impact is immense for students, their parents and their institutions. The most recent data from the US Department of Education reveal that many low- and middle-income families have taken on substantial debt to finance their children’s college education (Fuller and Mitchell, 2020[35]). According to analysis by the Federal Reserve Bank of New York (2018[36]), as of December 2020, 40% of recent college graduates were underemployed, that is, working in jobs that typically do not require a college degree, which affects their personal financial health and that of the broader economy.
Thus, there is a need to identify and improve students’ generic skills because it is these skills that employers deem essential for career success (Capital, 2016[37]; Hart Research Associates, 2013[38]; National Association of Colleges and Employers, 2018[39]; Rios et al., 2020[40]; World Economic Forum, 2016[41]).
Since 2019, CAE has pivoted away from using CLA+ solely as a higher education accountability and quality assurance instrument. Currently, in addition to providing institutions with information about institutional value-added, CAE also makes it possible for educators to use CLA+ results as a student diagnostic to identify students’ strengths and areas of improvement as well as for longitudinal and efficacy studies. The instrument can be used to answer institutional research questions such as:
- How ready are students for, and where do they need support in, their next step?
- How well is the institution developing essential skills in students?
- How much are students growing from year to year?
- How has a new curriculum improved students’ essential skills?
Additionally, CAE has renamed the Scientific and Quantitative Reasoning (SQR) subscore of the SRQ section to Data Literacy (DL), as the new name more accurately reflects the skills measured on the assessment. All reports from the autumn 2021 academic year onward will reflect this updated language. No changes have been made to the underlying construct; this is solely a renaming of the subsection.
In 2021, CAE introduced the Next Step Platform, which incorporates the client-facing applications used throughout the assessment process, including support assistance and account management, through one login. To better engage students, technology-enhanced elements such as video stimuli and responses, simulations, and drag-and-drop options can be embedded in performance-based assessments. New reporting capabilities will allow students and educators to better understand students’ readiness for their next step.
For educators, the Next Step Platform offers a convenient way to deliver CLA+. The platform also allows custom assessments to be easily designed, developed and administered on the same platform, reducing time and effort.
Students can complete assessments within the platform, and results can be quickly provided due to enhanced artificial intelligence (AI) scoring. The Next Step Platform will also offer students an opportunity to earn micro-credentials, an evidence-based measure of real-world skills that they can share with colleges and prospective employers.
In 2020, CAE introduced the Success Skills Assessment (SSA+) as a formative assessment of students’ generic skills. Ideally, institutions would assess students using SSA+ as they enter university, and would receive the students’ score reports shortly, if not immediately, following completion of the assessment. Following the assessment of students, institutions could implement courses of study or other curricular support to help students improve their skills.
SSA+, a 60-minute assessment, is aligned to the same constructs that are measured on CLA+, but uses technology-enhanced and other items to scaffold students’ generic skills rather than just asking students to write a single essay. There is still a written portion to the SSA+ PT, scored on the same rubric as the CLA+ PT. However, the SSA+ PT is meant to be more formative than the CLA+, allowing educators to work directly with students in the classroom on improving their skills.
The impetus for this development was based on requests from CLA+ clients who wanted a shorter assessment that returned student results more quickly and used technology-enhanced and more modern item types to measure their students’ skills. The students’ written responses are scored primarily with an automated scoring engine, and technology-enhanced and selected-response items are also automatically scored.
CAE believes that using a formative assessment of generic skills like SSA+, followed by curricular support to improve these skills, and ending with a summative assessment such as CLA+ will lead to better learning outcomes for higher education students. Any higher education institution in the United States that is interested in using an assessment to measure students’ generic skills is encouraged to reach out to CAE for more information on how to implement a testing plan for this purpose.
Educators can use students’ assessment scores and mastery levels of generic skills to help identify strengths and developmental support required for improvement. Being assessed this way is particularly valuable for those students who are most at risk of dropping out due to academic difficulty. Identifying students who might benefit from additional academic intervention early in their tenure may lead to an increase in student retention, persistence, and graduation rates. Furthermore, improving these skills would increase the likelihood that the individual student will have better higher education and post-higher education outcomes, such as higher GPA (Zahner, Ramsaran and Zahner, 2012[42]), appropriate employment, higher salary, and enrolment in a graduate programme (Zahner and James, 2016[43]; Zahner and Lehrfeld, 2018[44]).
Educators and employers clearly recognise that fact-based knowledge is no longer sufficient and that generic skills such as critical thinking, problem solving, and written communication skills are essential for success. The opportunity to improve students’ essential skills lies in identification and action. This can be further highlighted with the use of verified digital badges or a micro-credential, which is a movement that has been slowly gaining momentum (Mah, Bellin-Mularski and Ifenthaler, 2016[45]; Lemoine and Richardson, 2015[46]; Lemoine, Wilson and Richardson, 2018[47]; Rottmann and Duggan, 2021[48]). Assessments that are coupled with verified digital badges or micro-credentials provide educators with the opportunity to help students identify their strengths as well as areas where they can improve. This is fundamental to developing the critical thinkers, problem solvers, and communicators who will be essential in the future. With close and careful attention paid to students’ essential generic skills, even a small increase in the development of these skills could boost future outcomes for students, parents, institutions, and the overall economy.
[15] Ahuna, K., C. Tinnesz and C. VanZile-Tamsen (2010), ““Methods of Inquiry”: Using Critical Thinking to Retain Students”, Innovative Higher Education, Vol. 36/4, pp. 249-259, https://doi.org/10.1007/s10755-010-9173-5.
[20] Arum, R. and J. Roksa (2014), Aspiring Adults Adrift: Tentative Transitions of College Graduates, University of Chicago Press, Chicago.
[9] Astin, A. and L. Oseguera (2012), “Pre-College and Institutional Influences on Degree Attainment”, in College Student Retention: Formula for Student Success, Rowman & Littlefield Publishers, New York.
[11] Baum, S. (2014), “Higher Education Earning Premium Value, Variation, and Trends”, Urban Institute February Issue, https://www.urban.org/research/publication/higher-education-earnings-premium-value-variation-and-trends (accessed on 5 August 2022).
[34] Bowen, W., M. Mcpherson and T. Ream (2018), “Lesson Plan: An Agenda for Change in American Higher Education”, The Review of Higher Education, Vol. 41/2.
[37] Capital, P. (2016), 2016 Workforce-Skills Preparedness Report, http://www.payscale.com/data-packages/job-skills (accessed on 28 April 2021).
[6] De Brey, C. et al. (2021), Digest of Education Statistics 2019 (NCES 2021-009), https://nces.ed.gov/pubs2021/2021009.pdf (accessed on 5 August 2022).
[16] Engle, C., N. Reilly and H. Levine (2004), “A Case Study of an Academic Retention Program”, Journal of College Student Retention: Research, Theory & Practice, Vol. 5/4, pp. 365-383, https://doi.org/10.2190/jp0w-5358-y7dj-14b2.
[36] Federal Reserve Bank (2018), The Labor Market for Recent College Graduates, https://www.newyorkfed.org/research/college-labor-market/college-labor-market_underemployment_rates.htm (accessed on 5 August 2022).
[35] Fuller, A. and J. Mitchell (2020), “Which schools leave parents with the most college loan debt?”, The Wall Street Journal, https://www.wsj.com/articles/which-schools-leave-parents-with-the-most-college-loan-debt-11606936947 (accessed on 3 December 2020).
[12] Giani, M., P. Attewell and D. Walling (2019), “The Value of an Incomplete Degree: Heterogeneity in the Labor Market Benefits of College Non-Completion”, The Journal of Higher Education, Vol. 91/4, pp. 514-539, https://doi.org/10.1080/00221546.2019.1653122.
[4] Ginder, S., J. Kelly-Reid and F. Mann (2017), “Enrollment and Employees in Postsecondary Institutions, Fall 2016; and Financial Statistics and Academic Libraries, Fiscal Year 2016 First Look (Provisional Data)”, First Look (Provisional Data) (NCES 2015-012). U.S. Department of Education. Washington, DC: National Center for Education Statistics, https://nces.ed.gov/pubs2018/2018002.pdf (accessed on 5 August 2022).
[38] Hart Research Associates (2013), “It takes more than a major: Employer priorities for college learning and student success”, Liberal Education, Vol. 99/2.
[7] Hussar, B. et al. (2020), “The condition of education 2020”, Institute of Education Sciences, Vol. 5/1, https://nces.ed.gov/pubs2020/2020144.pdf (accessed on 5 August 2022).
[23] Jankowski, N. et al. (2012), Transparency and Accountability: An Evaluation of the VSA College Portrait Pilot, A Special Report from the National Institute for Learning Outcomes Assessment for the Voluntary System of Accountability, National Institute for Learning Outcomes Assessment, Champaign, IL, https://www.learningoutcomesassessment.org/wp-content/uploads/2019/02/VSA_Report.pdf.
[17] Johnson, S. and F. Stage (2018), “Academic Engagement and Student Success: Do High-Impact Practices Mean Higher Graduation Rates?”, The Journal of Higher Education, Vol. 89/5, pp. 753-781, https://doi.org/10.1080/00221546.2018.1441107.
[5] Kaplan, A. (2020), Voluntary Support of Education: Key Findings from Data Collected for the 2018-19 Academic Fiscal Year for US Higher Education Institutions, Council for Advancement and Support of Education, Washington, DC.
[26] Keller, C. and J. Hammang (2008), “The voluntary system of accountability for accountability and institutional assessment”, New Directions for Institutional Research, Vol. 2008/S1, pp. 39-48, https://doi.org/10.1002/ir.260.
[29] Klein, S. et al. (2009), Test Validity Study (TVS) Report, ETS Technical Report, Educational Testing Service, Princeton, NJ, https://www.ets.org/research/policy_research_reports/publications/report/2009/iddk.
[18] Kuh, G. (2008), High-Impact Educational Practices: What They Are, Who Has Access to Them, and Why They Matter, Association of American Colleges and Universities, Washington, DC.
[46] Lemoine, P. and M. Richardson (2015), “Micro-Credentials, Nano Degrees, and Digital Badges: New Credentials for Global Higher Education.”, International Journal of Technology and Educational Marketing, Vol. 5/1, pp. 36-49.
[47] Lemoine, P., W. Wilson and M. Richardson (2018), Marketing Micro-Credentials in Global Higher Education: Innovative Disruption, IGI Global, https://doi.org/10.4018/978-1-5225-5673-2.ch007.
[27] Liu, O. (2017), “Ten Years After the Spellings Commission: From Accountability to Internal Improvement”, Educational Measurement: Issues and Practice, Vol. 36/2, pp. 34-41, https://doi.org/10.1111/emip.12139.
[28] Liu, O. (2011), “Outcomes Assessment in Higher Education: Challenges and Future Research in the Context of Voluntary System of Accountability”, Educational Measurement: Issues and Practice, Vol. 30/3, pp. 2-9, https://doi.org/10.1111/j.1745-3992.2011.00206.x.
[8] Mabel, Z. and T. Britton (2018), “Leaving late: Understanding the extent and predictors of college late departure”, Social Science Research, Vol. 69, pp. 34-51, https://doi.org/10.1016/j.ssresearch.2017.10.001.
[22] Mah, D. (2016), “Learning Analytics and Digital Badges: Potential Impact on Student Retention in Higher Education”, Technology, Knowledge and Learning, Vol. 21/3, pp. 285-305, https://doi.org/10.1007/s10758-016-9286-8.
[45] Mah, D., N. Bellin-Mularski and D. Ifenthaler (2016), Foundation of digital badges and micro-credentials: Demonstrating and recognizing knowledge and competencies, https://doi.org/10.1007/978-3-319-15425-1.
[24] McPherson, P. and D. Shulenburger (2006), Toward a Voluntary System of Accountability Program (VSA) for Public Universities and Colleges, National Association of State Universities and Land-Grant Colleges, Washington, DC.
[25] Miller, M. (2007), “Editorial: The Commission on the Future of Higher Education”, Change: The Magazine of Higher Learning, Vol. 39/1, pp. 8-9, https://doi.org/10.3200/chng.39.1.8-9.
[1] Miller, R. and F. Rudolph (1962), “The American College and University: A History”, AAUP Bulletin, Vol. 48/4, https://doi.org/10.2307/40222930.
[21] Mullen, A., K. Goyette and J. Soares (2003), “Who Goes to Graduate School? Social and Academic Correlates of Educational Continuation after College”, Sociology of Education, Vol. 76/2, pp. 143-169, https://doi.org/10.2307/3090274.
[39] National Association of Colleges and Employers (2018), Are college graduates “career ready”?, https://www.naceweb.org/career-readiness/competencies/are-college-graduates-career-ready/ (accessed on 19 February 2018).
[32] Olson, K. (1973), “The G. I. Bill and Higher Education: Success and Surprise”, American Quarterly, Vol. 25/5, pp. 596-610, https://doi.org/10.2307/2711698.
[19] Potts, G. and B. Schultz (2008), “The freshman seminar and academic success of at-risk students”, College Student Journal, Vol. 42/2, pp. 647-658.
[33] Reville, P. (2020), Coronavirus gives us an opportunity to rethink K-12 education, https://www.bostonglobe.com/2020/04/09/opinion/coronavirus-gives-us-an-opportunity-rethink-k-12-education/ (accessed on 9 April 2020).
[40] Rios, J. et al. (2020), “Identifying Critical 21st-Century Skills for Workplace Success: A Content Analysis of Job Advertisements”, Educational Researcher, Vol. 49/2, pp. 80-89, https://doi.org/10.3102/0013189x19890600.
[48] Rottmann, A. and M. Duggan (2021), Micro-credentials in higher education, IGI Global.
[13] Shapiro, D. et al. (2014), Some college, no degree: A national view of students with some college enrollment, but no completion, Signature Report No. 7, National Student Clearinghouse Research Center, Herndon, VA.
[3] Snyder, T., C. de Brey and S. Dillow (2019), “Digest of Education Statistics 2017”, National Center for Education Statistics, https://nces.ed.gov/pubs2018/2018070.pdf (accessed on 5 August 2022).
[30] Steedle, J., H. Kugelmass and A. Nemeth (2010), “What Do They Measure? Comparing Three Learning Outcomes Assessments”, Change: The Magazine of Higher Learning, Vol. 42/4, pp. 33-37, https://doi.org/10.1080/00091383.2010.490491.
[14] Sullivan, D. (2010), “The Hidden Costs of Low Four-Year Graduation Rates”, Liberal Education, Vol. 96 (Summer 2010), pp. 24-31.
[2] Thelin, J. (2012), A History of American Higher Education: Third Edition, Johns Hopkins University Press, Baltimore, MD.
[10] U.S. Bureau of Labor Statistics (BLS) (2020), Learn more, earn more: Education leads to higher wages, lower unemployment, https://www.bls.gov/careeroutlook/2020/data-on-display/education-pays.htm (accessed on May 2020).
[41] World Economic Forum (2016), Global Challenge Insight Report: The Future of Jobs: Employment, Skills and Workforce Strategy for the Fourth Industrial Revolution, World Economic Forum, http://www3.weforum.org/docs/WEF_Future_of_Jobs.pdf.
[31] Zahner, D. and O. Cortellini (2021), The role and effect of remote proctoring on assessment in higher education, Proceedings of the 2021 American Educational Research Association, Washington, DC.
[43] Zahner, D. and J. James (2016), Predictive validity of a critical thinking assessment of post-college outcomes [Paper presentation], 2016 Conference of the American Educational Research Association, Washington, DC.
[44] Zahner, D. and J. Lehrfeld (2018), Employers’ and advisors’ assessments of the importance of critical thinking and written communication skills post-college [Paper presentation], 2018 Conference of the American Educational Research Association, New York, NY.
[42] Zahner, D., L. Ramsaran and D. Zahner (2012), “Comparing alternatives in the prediction of college success”, Annual Meeting of the American Educational Research Association.
1. The Servicemen's Readjustment Act of 1944, often referred to as the G. I. Bill, was a law that provided a range of benefits for returning World War II veterans.