With college ratings growing in significance for both consumers and policymakers, we asked our contributors: What criteria should weigh most heavily in college and university ratings? How should the U.S. Department of Education hold institutions accountable on these criteria?
What’s the right way to judge our institutions of higher education? Join the conversation: You’re seated at the Field Day blog round table.
Analyst, National College Access Network
Conflicting views on college ratings show the complexity of developing a system that is useful for both policymakers and consumers. The two groups' needs differ when it comes to ratings, but both require current, complete, and comprehensible data to promote accountability and well-informed consumer decisions.
When developing ratings, policymakers should hold institutions of higher education accountable for the aims of the programs through which those institutions receive money. For Pell Grants, that means spurring high completion rates among grant recipients—many of whom are low-income and first-generation college students. For loan programs, that means graduating students with an education that equips them to repay taxpayers' investment in their degree. Accordingly, policymakers should consider loan default rates and students' incomes 24 months after graduation to compare workforce outcomes by institution.
For consumers’ rating system needs, I endorse my colleague Carrie Warick’s suggestions for a system that includes information on institutions’ net price, admissions, completion rates, and average student loan debt and defaults, all disaggregated by institutional and student characteristics. Students need better data to make critical financial and professional decisions about matriculation.
Both groups’ needs for better data could be addressed by a student unit record system, which has been proposed by the New America Foundation and others. The imperfect data available underscores the need for a more complete collection of higher education information. The Integrated Postsecondary Education Data System only reports graduation rates for first-time, full-time students, ignoring growing numbers of returning and part-time students. Better data will allow policymakers, consumers, and researchers to better understand key questions about institutional outcomes.
Whether used for public information or accountability, ratings will only be as good as the data on which they are based. Students and institutions deserve the best basis on which to rate and be rated.
School Counselor, Fairfax County Public Schools
This past summer, I worked as a student advocate on a week-long trip to a mid-sized, mid-Atlantic university with a group of about 60 first-generation college students. As we met with admissions representatives and university advisers, they told us about their national rankings—intermingled with a few hard facts about five-year graduation rates, student financial aid packages, and admissions requirements. While the Princeton Review's top ranking for dining hall food is interesting, it is also subjective. I rarely hear students talk about lifestyle rankings like "best dorms" or "best food," though I know reputations matter for students and their families when deciding where to apply.
At the beginning of the college search, I see students swayed by parents who push for elite, top-ranked schools. With some elite colleges and universities boasting admission rates between 5 and 8 percent, I've seen students who demonstrate an interest in this type of setting apply to 10 to 20 colleges to hedge their bets on admission. Rankings strongly influence students' application lists, but once financial aid packages are parceled out and an enrollment deposit must be paid, the conversation often swings instead to best fit. University rankings contribute only to perceived prestige, blurring reality for students and parents, especially when rankings are based on data like a university's endowment or applicants' SAT scores.
For this reason, whenever I meet with admissions folks, I always ask not about prestige rankings, but about student financial aid, first-year retention, and four-year graduation rates. These data points shed better light on the student experience at a particular institution. Students need to understand the financial and academic realities of attending college. Getting into college is only one part of the postsecondary equation—being able to pay and stay on track academically are of premium importance.
Director of Policy and Federal Relations, National Association of Student Financial Aid Administrators
Ratings systems for colleges and universities should focus on criteria that evaluate institutions on their outcomes relative to the unique student populations they serve.
Simply grouping an institution with peers defined by traditional sector or geographic region does not give an accurate expectation of that institution’s outcomes. Ratings systems need to be able to evaluate institutions relative to the demographics of students they serve. This is what’s commonly referred to as an “input-adjusted” metric or evaluation.
Input adjustment involves examining outcomes while controlling for key factors so that valid comparisons can be made among the outcomes of different institutions. Predicted graduation rate is an excellent example of an input-adjusted metric. A calculation could be done using student demographic information—socioeconomic status or race, for instance—to determine the institution’s predicted graduation rate.
If such a calculation were incorporated into President Obama's proposed Postsecondary Institution Ratings System, the predicted rate could serve as that school's expected benchmark. The U.S. Department of Education could then evaluate each institution on how close its actual graduation rate is to its predicted rate, allowing an "apples to apples" comparison that would otherwise be impossible. As with most input-adjusted metrics, this would give schools an incentive to improve their rates without unfairly penalizing them or inadvertently encouraging them to stop admitting at-risk students.
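To make the input-adjustment idea concrete, here is a minimal sketch of the "actual minus predicted" comparison. Everything in it is hypothetical: the school names, the Pell-share figures, and the toy prediction model (a baseline rate minus a penalty for the share of Pell-eligible students) are invented for illustration, not drawn from any actual ratings proposal.

```python
# Hypothetical illustration of an input-adjusted graduation-rate metric.
# All numbers are invented; a real model would be fit on far richer data.

# Each school: (share of Pell-eligible students, actual graduation rate)
schools = {
    "School A": (0.60, 0.55),
    "School B": (0.20, 0.80),
    "School C": (0.60, 0.45),
}

def predicted_rate(pell_share, baseline=0.85, slope=0.40):
    """Toy predicted graduation rate: a baseline minus a penalty
    proportional to the share of Pell-eligible students."""
    return baseline - slope * pell_share

# Input-adjusted score: actual minus predicted. A positive score means
# the school outperforms expectations given the students it serves.
for name, (pell, actual) in schools.items():
    score = actual - predicted_rate(pell)
    print(f"{name}: predicted {predicted_rate(pell):.2f}, "
          f"actual {actual:.2f}, adjusted {score:+.2f}")
```

Note how the adjustment changes the ranking: on raw graduation rates, School B (0.80) towers over Schools A and C, but on the adjusted metric School A, which serves the same hypothetical population as School C yet graduates more of its students, compares far more favorably than its raw rate suggests.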
Institutions within the same sector and state, or with similar missions, can vary widely in the characteristics of their students and programs. Given the differences that can exist even within broad categories of institutions, colleges and universities must be evaluated on how well they serve their own unique populations of students. Only then will schools' ratings accurately measure the outcomes they produce.
What do you think? Engage these bloggers or share your own perspective on higher education rankings and ratings in the comments below.