Because the research study comes from my alma mater, it pains me more than usual to point out the flaws in its methodology:
- First, the most basic rule of statistical inference is that a sample must be drawn from the population you want to describe. Think of it like this: if you measure the average hair color of people in Norway, does that tell you the average hair color of people in Europe? Not really. To estimate anything about Europeans generally, you would need a representative sample of Europeans; a sample of Norwegians only supports inferences about Norwegians, because a sample must come from the population you are seeking to study. For the same reason, the UChicago study is flawed from the outset. The authors acknowledge that their sample of students, high schools, and colleges attended is highly unrepresentative: it covers only Chicago public schools, with no charter schools and no private schools (the latter of which have seen particularly severe grade inflation). The only conclusions they can draw are about graduates of Chicago public schools, not about students graduating from the more than 25,000 public, private, and charter high schools in the U.S. The whole point of statistics is to replace assumptions with data, and the authors should know that they cannot, with any academic (or even moral) integrity, draw conclusions about a population their sample does not represent.
- Second, beyond breaking this first and most important rule (using a representative sample), why is their sample likely to distort the results? Because grade inflation has been greatest at the best schools, while the worst schools have actually experienced grade deflation. Their study therefore could not pick up the massive grade inflation in private schools that has increasingly made grades less predictive relative to test scores.
- Third, although grades have become less predictive of success in college, they are, as a single metric, still better than SAT/ACT scores as a single metric. But that is beside the point. No one advocates using SAT/ACT scores alone for admissions, and it should be equally surprising that anyone would advocate using grades alone, when adding SAT/ACT scores to grades has been shown (in studies of millions of students over decades, not just the roughly 50,000 students in this unrepresentative sample) to improve the prediction of success in college. In the regression model they used, the authors entered grades first and test scores second, so of course, in that order, test scores added little predictive value beyond grades (especially given their unrepresentative sample). Had they entered test scores first and grades second, grades would likewise have added little predictive value beyond test scores alone. The way they reported their "findings" therefore heavily distorted the predictive capacity of test scores.
- Fourth, this study was conducted with data from 13 to 16 years ago. Even if it had included private schools, it would not have fully captured the corrosive effect of grade inflation since then (and thus the increasing importance of SAT/ACT scores).
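The sampling problem in the first point is easy to see with simulated data. Here is a toy sketch; every number below is invented purely for illustration and nothing is taken from the study itself:

```python
import random
from statistics import mean

random.seed(0)

# Invented population: two school types with different (hypothetical) GPA levels.
public = [random.gauss(3.0, 0.4) for _ in range(8000)]
private = [random.gauss(3.6, 0.3) for _ in range(2000)]
population = public + private

# A sample drawn only from public schools vs. one drawn from everyone.
public_only_sample = random.sample(public, 500)
representative_sample = random.sample(population, 500)

print(f"true population mean GPA:   {mean(population):.2f}")
print(f"public-only sample mean:    {mean(public_only_sample):.2f}")
print(f"representative sample mean: {mean(representative_sample):.2f}")
```

No matter how large the public-only sample gets, it converges on the public-school mean, not the population mean, which is exactly why conclusions drawn from Chicago public schools cannot be extended to U.S. high schoolers in general.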
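The order-of-entry effect in the third point can also be sketched numerically. When two predictors are correlated, as high-school grades and test scores are, whichever variable enters a regression first absorbs their shared variance, so the one added second looks weak regardless of its standalone value. A minimal sketch with simulated, entirely hypothetical data (not the study's data or model):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical setup: grades and test scores are correlated noisy proxies
# for the same underlying ability, which also drives college GPA.
ability = rng.normal(0, 1, n)
hs_gpa = ability + rng.normal(0, 0.8, n)
test = ability + rng.normal(0, 0.8, n)
college_gpa = ability + rng.normal(0, 1.0, n)

def r2(predictors, y):
    """R-squared of an ordinary-least-squares fit of y on the predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_grades = r2([hs_gpa], college_gpa)
r2_tests = r2([test], college_gpa)
r2_both = r2([hs_gpa, test], college_gpa)

print(f"grades alone:        R^2 = {r2_grades:.3f}")
print(f"tests alone:         R^2 = {r2_tests:.3f}")
print(f"tests added second:  R^2 gain = {r2_both - r2_grades:.3f}")
print(f"grades added second: R^2 gain = {r2_both - r2_tests:.3f}")
```

In this toy setup both predictors are equally strong on their own, yet whichever is added second contributes only a small R-squared gain. Reporting just "test scores added second" makes test scores look nearly useless, which is the distortion described above.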