SAT, ACT Scores Alone Not Enough to Predict First-Year College Success, Says New Study
Posted By Derek Johnson on July 15, 2016 at 7:27 am
An incoming college freshman’s high school GPA provides more insight into first-year academic success than popular college admissions tests such as the SAT and ACT, according to a National Association for College Admission Counseling survey of 400 colleges and universities.
The survey also found that slightly more than half (51 percent) of these schools conduct what are known as predictive validity tests—reviews to determine how closely their various admissions criteria such as testing, class rank, recommendations and writing ability correlate to college success. “Overall, it is clear that high school grades are by far the most significant predictor of college academic achievement,” the NACAC authors conclude in the study.
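A predictive validity review of the kind described above can be sketched, in its most minimal form, as a correlation between each admissions criterion and a first-year outcome. The sketch below uses synthetic, illustrative data; the variable names, coefficients, and numbers are assumptions for demonstration only, not figures from the NACAC study.

```python
import numpy as np

# Synthetic, illustrative data only -- not drawn from the NACAC study.
rng = np.random.default_rng(0)
n = 500

hs_gpa = rng.normal(3.2, 0.4, n)    # high school GPA
sat = rng.normal(1100, 150, n)      # admissions test score
# Assumed model: first-year GPA driven mostly by HS GPA, less by the
# test score, plus noise -- mirroring the study's headline finding.
fy_gpa = 0.6 * hs_gpa + 0.0005 * sat + rng.normal(0, 0.3, n)

# A minimal "validity study": correlate each criterion with the outcome.
r_gpa = np.corrcoef(hs_gpa, fy_gpa)[0, 1]
r_sat = np.corrcoef(sat, fy_gpa)[0, 1]
print(f"HS GPA vs. first-year GPA: r = {r_gpa:.2f}")
print(f"SAT    vs. first-year GPA: r = {r_sat:.2f}")
```

Real validity studies are considerably richer, folding in class rank, recommendations, and writing ability and correcting for restriction of range, but the core idea is the same: measure how strongly each criterion tracks later performance.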
However, responses from participating schools also revealed that the SAT, ACT and other tests still made “a significant contribution to the ability to predict college academic performance.”
The authors advise caution in drawing broad conclusions on college success, noting that there is little uniformity in the way each school conducts its review process. That’s why they advise universities to conduct regular validity studies that reflect their individual mission, criteria and target populations. “[Validity studies] can, for instance, help create appropriate targeting in the recruitment of new students and allocation of financial aid funds (both need- and merit-based). In general, [they] can make a strong contribution to strategic planning for the institution as a whole,” the authors note.
Pieces of a college success puzzle
The survey results on college success come at a time when a small but growing minority of schools has begun to re-evaluate the role of the SAT, ACT and other tests in college admissions. In 2014, a Washington Post analysis found that SAT test-taking had declined in 29 states since 2006, in many cases by more than 20 percent, while three states saw a decline in students taking the ACT. Nearly 200 universities have made the exams optional for admissions.
In 2016, The College Board, which develops the SAT, unveiled a revamped version of the exam in response to what President and CEO David Coleman said were concerns that the old test was not doing enough to prepare students for college. “Admissions officers and counselors have said they find the data from admissions exams useful, but are concerned that these exams have become disconnected from the work of high school classrooms and surrounded by costly test preparation,” Coleman said during an interview with CNN.
John Barnshaw, senior higher education analyst for the American Association of University Professors, said the NACAC survey results are not surprising and should not be considered a knock against college admissions officers factoring the SAT or ACT into their decisions. “The question is: does the SAT and ACT give you something that GPA alone doesn’t? For the most part, the answer is yes,” Barnshaw said.
He pointed to research suggesting that, when paired, high school GPA and college admissions test scores can give schools a more accurate prediction of a student’s likelihood of success than either metric alone. Follow-up research on schools that have gone “test-optional” or eliminated the tests entirely has shown mixed results, in some cases boosting a school’s selectivity but decreasing its diversity, he said.
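The incremental value of pairing the two metrics can be illustrated by comparing how much of the variation in first-year GPA each model explains. The sketch below fits two ordinary least-squares models to synthetic data, one using high school GPA alone and one adding a test score; all names and numbers are illustrative assumptions, not the research Barnshaw cites.

```python
import numpy as np

# Synthetic, illustrative data -- coefficients are assumptions chosen
# only to mimic "both predictors carry some independent signal".
rng = np.random.default_rng(1)
n = 500
hs_gpa = rng.normal(3.2, 0.4, n)
test = rng.normal(1100, 150, n)
fy_gpa = 0.6 * hs_gpa + 0.0005 * test + rng.normal(0, 0.3, n)

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_gpa_only = r_squared([hs_gpa], fy_gpa)
r2_combined = r_squared([hs_gpa, test], fy_gpa)
print(f"R^2, GPA alone: {r2_gpa_only:.3f}")
print(f"R^2, GPA + test: {r2_combined:.3f}")
```

Because the second model nests the first, its in-sample R² can only match or exceed the GPA-only figure; the practical question for an admissions office is whether the gain is large enough to justify the test's costs and side effects.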
Rather than taking an all-or-nothing position on the value of college admissions exams, Barnshaw recommends viewing them as one of many puzzle pieces that help universities identify students who can succeed at their institution. “If you tell me there’s a student and he’s made up of 100 puzzle pieces, I want to have as many pieces as I can because it gives me a more composite view of the student,” Barnshaw said.
What do admissions exams measure?
In recent years, critics have increasingly questioned whether admissions exams favor students from wealthier families, who can afford expensive tutors. Michele Hernandez, an author of multiple books on the college admissions process, took to the pages of The New York Times to argue for replacing the traditional SAT and ACT with subject tests and high school AP/IB course results.
“The majority of students applying to elite colleges spend hundreds of hours doing SAT/ACT prep when they could be pursuing scholarly activities. Many New York City families will spend over $20,000 on SAT prep and top tutors charge over $600 an hour,” writes Hernandez. “SAT/ACT are mostly used to turn away applicants from overrepresented backgrounds and as such are grossly unfair.”
Others have complained of racial bias. In a 2015 University of California, Berkeley study, author Saul Geiser examined test results from 1.1 million California residents from 1994 through 2011 and found that race was the predominant differentiator, even when economic status was factored in. “Rather than declining in salience, race and ethnicity are now more important than either family income or parental education in accounting for test score differences,” Geiser writes.
The College Board has pushed back on this idea, calling it a “rumor” and arguing that a student’s high school and course-taking decisions explain much of the disparity. “While SAT scores may be correlated with socioeconomic status (parental income and education), correlation does not mean two phenomena are causally related (e.g., parental income causes students to do well on the SAT),” writes Lynn Letukas, associate research scientist at The College Board. “Students who come from lower socioeconomic statuses can and have done well on the SAT. Generally, students do well on the SAT because they are exposed to and apply knowledge gained in rigorous course material in high school and take rigorous core courses.”
When it comes to charges that SAT questions are biased toward certain racial or economic groups, Letukas responds that The College Board rigorously tests each question to see whether it unfairly advantages one group over another or produces noticeable disparities in how often different groups answer correctly. When differences appear, she claims they are often the result of external factors that the SAT cannot control for.
“These differences in scores on standardized tests do not occur as a result of bias of the test itself, but rather, external factors that play a role in quality education and ultimately student achievement, such as access to rigorous course work, books and stable peer groups, family support and resources, and academic preparation,” Letukas writes.