Monday, I’ll be unveiling the 2020 Edu-Scholar Public Influence Rankings, recognizing the 200 university-based scholars who had the biggest influence on educational practice and policy last year. Today, I want to run through the methodology used to generate those rankings.
Given that more than 20,000 university-based faculty in the U.S. are researching education, simply making it onto the Edu-Scholar list is an accomplishment in its own right. The list comprises university-based scholars who focus primarily on educational questions (“university-based” meaning the scholar holds a formal university affiliation). Scholars without a formal affiliation listed on a university website are ineligible.
The top 150 finishers from last year automatically qualified for a spot in this year’s Top 200, so long as they accumulated at least 10 “active points” in last year’s scoring. (Active points are those reflecting current activity, so they include all categories except Google Scholar and number of books published.) The automatic qualifiers were then augmented by “at-large” additions chosen by the RHSU Selection Committee, a disciplinarily, methodologically, and ideologically diverse group of accomplished scholars. (All committee members had automatically qualified for this year’s rankings.)
I’m indebted to the RHSU Selection Committee for its assistance and want to acknowledge the 2020 members: Camilla P. Benbow (Vanderbilt), Jeanne Brooks-Gunn (Columbia), Linda Darling-Hammond (Stanford), Susan Dynarski (U. Michigan), Donna Ford (Ohio State), Dan Goldhaber (U. Washington), Sara Goldrick-Rab (Temple), Jay P. Greene (U. Arkansas), Eric Hanushek (Stanford), Douglas N. Harris (Tulane), Shaun Harper (USC), Jeffrey R. Henig (Columbia), Robert Kelchen (Seton Hall), Helen F. Ladd (Duke), Gloria Ladson-Billings (U. Wisconsin), Marc Lamont Hill (Temple), Bridget Terry Long (Harvard), Pedro Noguera (UCLA), Robert C. Pianta (U. Virginia), Jonathan Plucker (Johns Hopkins), Stephen W. Raudenbush (U. Chicago), Barbara Schneider (Michigan State), Marcelo Suarez-Orozco (UCLA), Carol Tomlinson (U. Virginia), Jacob L. Vigdor (U. Washington), Kevin G. Welner (CU Boulder), Martin West (Harvard), Yong Zhao (U. Kansas), and Jonathan Zimmerman (U. Penn).
OK, so that’s how the Top 200 list was compiled. How were the actual rankings determined? Each scholar was scored in nine categories, yielding a maximum possible score of 200. Scores are calculated as follows:
Google Scholar Score: This figure gauges the number of articles, books, or papers a scholar has authored that are widely cited. A useful, popular way to measure the breadth and impact of a scholar’s work is to list works in descending order of how often each is cited and then identify the largest number h such that the scholar’s top h works have each been cited at least h times. (This is known in the field as a scholar’s “h-index.”) For instance, a scholar who had 20 works that were each cited at least 20 times, but whose 21st most-frequently cited work was cited just 10 times, would score a 20. The measure recognizes that bodies of scholarship matter greatly for influencing how important questions are understood and discussed. The search was conducted using the advanced search “author” filter in Google Scholar. For those scholars who have created a Google Scholar account, their h-index was available at a glance. For those scholars without a Google Scholar account, a hand search was used to calculate their score and cull out works by other, similarly named individuals. While Google Scholar is less precise than more specialized citation databases, it has the virtue of being multidisciplinary and publicly accessible. Points were capped at 50. (This search was conducted on Dec. 19.)
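For readers who want to see that calculation spelled out, here is a minimal Python sketch of the h-index logic described above, applied to a list of per-work citation counts. The function names are mine and purely illustrative; only the h-index definition and the 50-point cap come from the rubric.

```python
def h_index(citation_counts):
    """Largest h such that h works have been cited at least h times each."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def google_scholar_points(citation_counts):
    """Edu-Scholar Google Scholar category: the h-index, capped at 50."""
    return min(h_index(citation_counts), 50)

# The example from the text: 20 works each cited at least 20 times,
# with the 21st most-cited work cited just 10 times.
assert h_index([25] * 20 + [10]) == 20
```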
Book Points: A search on Amazon tallied the number of books a scholar has authored, co-authored, or edited. Scholars received 2 points for a single-authored book, 1 point for a co-authored book on which they were the lead author, a half-point for co-authored books on which they were not the lead author, and a half-point for any edited volume. The search was conducted using an “Advanced Books Search” for the scholar’s first and last name. (On a few occasions, a middle initial or name was used to avoid duplication with authors who had the same name, e.g., “David Cohen” became “David K. Cohen.”) The search only encompassed “Printed Books” (one of several searchable formats) so as to avoid double-counting books available in other formats. This means that books released only as e-books are omitted. To date, however, few scholars on this list pen books that are published solely as e-books. “Out of print” volumes were excluded, as were reports, commissioned studies, and special editions of magazines or journals. This measure reflects the conviction that the visibility, packaging, and permanence of books allow them to play an outsized role in influencing policy and practice. Book points were capped at 20. (This search was conducted on Dec. 16.)
Highest Amazon Ranking: This reflects the scholar’s highest-ranked book on Amazon. The rank of that book was subtracted from 400,000, and the result was divided by 20,000 to yield a maximum score of 20. (In other words, a scholar’s best book had to rank in Amazon’s top 400,000 to be awarded points in this category.) The nature of Amazon’s ranking algorithm means that this score can be volatile and favors more recent sales. The result is an imperfect measure but one that conveys real information about whether a scholar has penned a book that is influencing contemporary discussion. (This search was conducted on Dec. 16.)
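As an illustration of the arithmetic in these two book-related categories, here is a short sketch of how they might be coded. The role labels, function names, and example inputs are my own shorthand, not terms from the methodology; the point values, divisor, and caps come from the rules above.

```python
# Points per authorship role, per the rubric above (labels are my shorthand).
BOOK_POINTS = {
    "sole_author": 2.0,
    "lead_coauthor": 1.0,   # co-authored, listed as lead author
    "coauthor": 0.5,        # co-authored, not the lead author
    "editor": 0.5,          # any edited volume
}

def book_points(roles):
    """Sum role points across a scholar's in-print books, capped at 20."""
    return min(sum(BOOK_POINTS[role] for role in roles), 20)

def amazon_points(best_rank):
    """(400,000 minus the best book's Amazon rank) / 20,000, floored at zero."""
    return max(0.0, (400_000 - best_rank) / 20_000)

print(book_points(["sole_author", "lead_coauthor", "editor"]))  # 3.5
print(amazon_points(100_000))                                   # 15.0
```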
Syllabus Points: This seeks to measure a scholar’s long-term academic impact on what is being read by the rising generation of university students. This metric was scored using OpenSyllabusProject.org, the most comprehensive database of syllabi in existence. It houses over 6 million syllabi from across American, British, Canadian, and Australian universities. A search of the database was used to identify each scholar’s top-ranked text. The score reflects the number of times that text appeared on syllabi, with the tally then divided by 25. The score was capped at 10 points. (This search was conducted on Dec. 17-18.)
Education Press Mentions: This measures the total number of times the scholar was quoted or mentioned in Education Week, the Chronicle of Higher Education, or Inside Higher Ed during 2019. Searches were conducted using each scholar’s first and last name. If applicable, searches included common diminutives; they were also conducted both with and without middle initials. A scholar’s counts in the Chronicle and Inside Higher Ed were averaged, and that average was added to the scholar’s count in Education Week. (This was done to avoid overweighting higher education.) The resulting figure was multiplied by two, with total Ed Press points then capped at 30. (Education Week was searched on Dec. 18, Inside Higher Ed on Dec. 16, and the Chronicle on Dec. 17.)
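To make the averaging step concrete, here is a small sketch of the Ed Press arithmetic under the rules just described; the function name and the example counts are hypothetical.

```python
def ed_press_points(edweek, chronicle, inside_higher_ed):
    """Average the two higher-ed outlets, add Education Week,
    double the result, and cap the total at 30."""
    raw = edweek + (chronicle + inside_higher_ed) / 2
    return min(2 * raw, 30)

# Hypothetical counts: 6 Ed Week, 4 Chronicle, 2 Inside Higher Ed mentions.
print(ed_press_points(6, 4, 2))  # 2 * (6 + 3) = 18
```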
Web Mentions: This reflects the number of times a scholar was referenced, quoted, or otherwise mentioned online in 2019. The intent is to use a “wisdom of crowds” metric to gauge a scholar’s influence on the public discourse last year. The search was conducted using Google. The search terms were each scholar’s name and university affiliation (e.g., “Bill Smith” and “Rutgers University”). Using affiliation served a dual purpose: It avoids confusion due to common names and increases the likelihood that mentions are related to university-affiliated activity. Variations of a scholar’s name (such as common diminutives and middle initials) were included in the results, if applicable. Points were calculated by dividing total mentions by 30. Scores were capped at 25. (This search was conducted on Dec. 18.)
Newspaper Mentions: A LexisNexis search was used to determine the number of times a scholar was quoted or mentioned in U.S. newspapers. Again, searches used a scholar’s name and affiliation; diminutives and middle initials, if applicable, were included in the results. To avoid double-counting, the scores do not include any mentions from Education Week, the Chronicle of Higher Education, or Inside Higher Ed. Points were capped at 30. (The search was conducted on Dec. 17.)
Congressional Record Mentions: A simple name search in the Congressional Record for 2019 determined whether a scholar was referenced by a member of Congress. Qualifying scholars received 5 points. (This search was conducted on Dec. 18.)
Twitter Score: Since Kred and Klout no longer score individual Twitter accounts, this year Followerwonk’s “Social Authority” score was used. Followerwonk scores each Twitter account on a scale of 0-100 based on the retweet rate of the user’s few hundred most recent tweets, with an emphasis placed on more recent tweets, while accounting for other user-specific variables (such as follower count). While I’m highly ambivalent about the role played by social media, it’s indisputable that many public scholars exert significant influence via their social-media activity, and the lion’s share of this activity plays out on Twitter. Each score was divided by 10, yielding a maximum score of 10. (This search was conducted on Dec. 17.)
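Putting the nine categories together, here is an illustrative sketch of the final tally. It takes already-computed per-category points as inputs (e.g., syllabus appearances divided by 25, web mentions divided by 30, the Followerwonk score divided by 10; note the rules above do not spell out a divisor for newspaper mentions, so that category is assumed to arrive as points), applies each cap, and sums to the 200-point maximum.

```python
# Per-category point caps from the rubric above; they sum to the 200-point maximum.
CAPS = {
    "google_scholar": 50,
    "books": 20,
    "amazon": 20,
    "syllabus": 10,
    "ed_press": 30,
    "web": 25,
    "newspapers": 30,
    "congressional": 5,
    "twitter": 10,
}

def edu_scholar_total(points):
    """Apply each category's cap and sum to a final 0-200 score.

    `points` maps category -> raw points computed per the rules above
    (the Congressional Record entry is simply 5 or 0).
    """
    return sum(min(points.get(category, 0), cap) for category, cap in CAPS.items())

assert sum(CAPS.values()) == 200  # sanity check against the stated maximum
```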
There are obviously lots of provisos when it comes to the Edu-Scholar results. Different disciplines approach books and articles differently. Senior scholars have had more opportunity to build a substantial body of work and influence (for what it’s worth, the results unapologetically favor sustained accomplishment). And readers may care more for some categories than others. That’s all well and good. The point is to spur discussion about the nature of constructive public influence: who’s doing it, how valuable it is, and how to gauge a scholar’s contribution.
A few notes regarding questions that arise every year:
• First, there are some academics who dabble (quite successfully) in education but for whom education is only a sideline. They are not included in these rankings. For a scholar to be included, education must constitute a substantial slice of their scholarship. This helps ensure that the rankings serve as something of an apples-to-apples comparison.
• Second, scholars sometimes change institutions in the course of a year. My policy is straightforward: For the categories where affiliation is used, searches are conducted using a scholar’s year-end affiliation. This avoids concerns about double-counting and reduces the burden on my overworked RAs. Scholars do get dinged a bit in the year they move. But that’s life.
• Third, media attention or web mentions resulting from a scholar’s legal issues or university disciplinary proceedings are not included.
• Fourth, it goes without saying that Monday’s list represents only a sliver of the faculty doing this work. For those interested in scoring additional scholars, it’s a straightforward task to do so using the scoring rubric enumerated above. Indeed, the exercise was designed so that anyone can generate a comparable rating for a given scholar in a half hour or less.
• Last, this is an admittedly imperfect and evolving exercise. Questions and suggestions are always welcome. And, if scholars would like to have their names listed differently or have their discipline categorized differently, I’m happy to be as responsive as I can within the bounds of consistency.
Finally, a note of thanks: For the hard work of coordinating the selection committee, finalizing the 2020 list, and then spending dozens of hours crunching and double-checking all of these data for 200 scholars, I owe an enormous shoutout to my gifted, diligent, and wholly remarkable research assistants RJ Martin, Matt Rice, and Hannah Warren.
Frederick Hess is director of education policy studies at AEI and an executive editor at Education Next.
This post originally appeared on Rick Hess Straight Up.