Describes a method to provide an independent, community-sourced set of best practice criteria with which to assess global university rankings and to identify the extent to which a sample of six rankings, Academic Ranking of World Universities (ARWU), CWTS Leiden, QS World University Rankings (QS WUR), Times Higher Education World University Rankings (THE WUR), U-Multirank, and US News & World Report Best Global Universities, met those criteria. The criteria fell into four categories: good governance, transparency, measure what matters, and rigour. The relative strengths and weaknesses of each ranking were compared. Overall, the rankings assessed fell short of fully meeting the criteria, with greatest strengths in the area of transparency and greatest weaknesses in the area of measuring what matters to the communities they were ranking. The ranking that most closely met the criteria was CWTS Leiden. Scoring poorly across all the criteria were the THE WUR and US News rankings. Suggestions for developing the ranker rating method are described.
Global university rankings are now an established part of the global higher education landscape. Students use them to help select where to study, faculty use them to select where to work, universities use them to market themselves, funders use them to select who to fund, and governments use them to set their own ambitions. While the international research management community are not always the ones in their institutions that deal directly with the global university ranking agencies, they are one of the groups that feel their effect most strongly. This might be through their university’s exclusion from accessing studentship funding sources based on its ranking position; through requests to collect, validate, and optimise the data submitted; or through calls to implement strategies that may lead to better ranking outcomes. At the same time as having to work within an environment influenced by university rankings, the research management community are acutely aware of, and concerned about, the perceived invalidity of the approaches the rankings use.
For this reason, the International Network of Research Management Societies (INORMS) Research Evaluation Working Group (
The practice of ranking universities goes back to the early twentieth century when informal lists of US universities were occasionally published. The first formal and significant ranking, however, was the US News America’s Best Colleges published from 1983 (
The first international, but not global, ranking was published by Asiaweek in 1999 and 2000 (
The year 2004 saw the appearance of the Ranking Web of Universities, or Webometrics rankings, which at first measured only web activity, and the World University Rankings published by Times Higher Education Supplement (THES) and the QS graduate recruitment firm. Critical accounts of the THES-QS world rankings and their successors can be found in Holmes (
In recent years, media and academic interest has shifted to global rankings which have been severely criticised on several grounds within the international research community. General critiques are offered by Usher (
Other texts claim that rankings promote global elitism by encouraging a shift of investment towards highly ranked universities at the expense of others (
Others have presented evidence that rankings encourage universities to forget their social role or to lose interest in local or regional problems. Lee, Vance, Stensaker, and Ghosh (
A number of writers have discussed methodological and technical issues. Turner (
The impact of rankings on university policy has been discussed by Docampo, Egret and Cram (
Aguillo, Bar-Ilan, Levene and Ortega (
Piro and Sivertsen (
Although the critiques of global rankings are wide-ranging, they have not had much influence on higher education leaders. University administrators have sometimes expressed reservations about rankings but in general have been willing to participate, occasionally breaking ranks if their institutions fall too far (
Some scholars believe that the defects of global rankings far outweigh any possible benefits. Adler and Harzing (
It must be noted that the academic literature does include attempts to justify the role and methodology of rankings. Sowter (
There are others who find some merit in the rankings. Wildavsky (
It must be said, however, that some of these studies also observe that the characterisation of a ‘top’ university as defined by the rankings, and the pursuit of a better ranking position based on developing such characteristics, might not always be locally relevant or ultimately beneficial.
There have been attempts to construct rankings that avoid the various problems and defects that have been identified. Waltman et al. (
Another attempt to reform international rankings was the production of the Berlin Principles on Ranking of Higher Education Institutions in 2006 (
The Centre for Science and Technology Studies (CWTS) in Leiden, home of the Leiden Ranking and birthplace of the Leiden Manifesto on the responsible use of research metrics (
The existence of such principles is a welcome attempt to provide some best practice guidance for the design and use of university rankings. However, the fact that they were influenced and/or developed by university rankers themselves could be seen to affect their neutrality. It is also concerning that the only body currently providing any assessment of university rankers is one where rankers occupy five out of eleven seats on the Executive Committee (
It was against this background that the INORMS Research Evaluation Working Group sought to both provide an independent, community-sourced set of best practice criteria against which to assess the global university rankings and then to identify the extent to which a sample of rankings met those criteria.
In parallel with the INORMS REWG’s work to rate the global university rankings, the group also developed a framework for responsible research evaluation, called SCOPE (
The ‘S’ of SCOPE states that prior to any evaluation attempt there needs to be a clear articulation of what is valued about the entity under evaluation from the perspective of the evaluator and the evaluated. To this end the group undertook a literature search to develop a draft set of best practice criteria for fair and responsible university rankings. These were circulated to the international research evaluation community for comment via the INORMS REWG, LIS-Bibliometrics, and INORMS member organisation circulation lists such as that of the UK Association of Research Managers and Administrators (ARMA) Research Evaluation Special Interest Group as well as via various international research management conferences. Responses were received from a broad cross-section of the community: universities, academics, publishers, and ranking agencies, which were then worked into the final set of criteria (see section 4). The criteria were grouped into four common themes: good governance, transparency, measure what matters, and rigour.
It could be argued that some of the criteria are challenging to meet, especially for some of the commercial ranking agencies, for example, around conflicts of interest. However, it was felt to be important to remain true to the values of the community even where they were aspirational. As noted by Gadd and Holmes (
The ‘C’ of the SCOPE framework states the importance of considering the evaluative context - why and whom you are evaluating - prior to the evaluation. The purpose of this evaluation was to highlight the extent to which various ranking agencies adhered to the best practice expectations of the wider research evaluation community and to expose their relative strengths and weaknesses, with the ultimate aim of incentivising them to address any deficiencies. By clarifying the context, the group moved away from early thoughts of ‘ranking the rankings’, recognising that this might only lead to self-promotion by the top-most ranked, rather than behaviour change.
As the work was undertaken by volunteers, it was not possible to assess all the global university rankings, so it was decided to test the model on six of the largest and most influential university rankings to provide a proof-of-concept. This group was selected by consulting with members of the INORMS REWG as to the most frequently used rankings in their region, and is not a reflection on their quality. The final list included the Academic Ranking of World Universities (ARWU), CWTS Leiden, the QS World University Rankings (QS WUR), the Times Higher Education World University Rankings (THE WUR), U-Multirank, and the US News & World Report Best Global Universities.
Having established our values and context, the SCOPE framework’s ‘O’ - options for evaluating - were then considered. To run the assessment, the criteria collected at the ‘values’ stage were translated into assessable indicators that were felt to be suitable proxies for the criteria being assessed. As the group were seeking to assess qualities rather than quantities, it was felt to be important to give assessors the opportunity to provide qualitative feedback in the form of free-text comments, as well as scores on a three-point scale according to whether the ranker fully met (2 marks), partially met (1 mark), or failed to meet the set criteria (0 marks).
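As a sketch of what a single review might look like in data terms, each criterion receives a judgement on the three-point scale plus a free-text comment, and the quantitative part can simply be summed. The field names and example judgements below are invented for illustration and are not the group’s actual schema.

```python
# Minimal sketch of one reviewer's assessment of a single ranking.
# Criterion IDs mirror the tables in this article; judgements and comments are invented.
SCORES = {"fully met": 2, "partially met": 1, "not met": 0}

review = [
    {"criterion": "A1.1", "judgement": "fully met", "comment": "Has an advisory board."},
    {"criterion": "A3.1", "judgement": "not met", "comment": "No conflict-of-interest statement found."},
    {"criterion": "B2.1", "judgement": "partially met", "comment": "Weightings published; survey details sparse."},
]

total = sum(SCORES[item["judgement"]] for item in review)
print(f"Quantitative score: {total} / {2 * len(review)}")  # -> Quantitative score: 3 / 6
```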
To ensure transparency and mitigate bias, twelve international experts were identified and invited by members of the INORMS REWG to each provide a review of one ranking agency. Due to the pandemic, only eight were able to provide a rating.
INORMS REWG members also undertook evaluations, and, in line with the SCOPE principle of ‘evaluating with the evaluated,’ each ranker was also invited to provide a self-assessment in line with the community criteria. Between one and four reviews were received for each ranking. Only one ranking agency, CWTS Leiden, accepted the offer to self-assess, providing free-text comments only.
The reviews were then forwarded to a senior expert reviewer, Richard Holmes, author of the University Ranking Watch blog (
The ‘P’ of the SCOPE framework represents ‘probe’ and requires that any evaluative approach is examined for discriminatory effects, gaming potential and unintended consequences. We observed some criteria where rankings might be disadvantaged for good practice, for example where a ranking did not use surveys and so could not score. This led us to introduce a ‘Not Applicable’ category to ensure they would not be penalised.
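A minimal sketch of how such a ‘Not Applicable’ category can be kept from penalising a ranking: N/A criteria are excluded from both the achieved score and the total possible score. The values below are invented, and the published workbook may handle this differently.

```python
# Hypothetical scores for one ranking on five criteria: 0, 1, 2, or None for 'Not Applicable'.
scores = {"D2.1": 2, "D2.2": None, "D2.3": None, "D3.1": 1, "D4.1": 0}

applicable = {k: v for k, v in scores.items() if v is not None}
share = sum(applicable.values()) / (2 * len(applicable))  # achieved / total possible
print(f"{share:.0%} of the possible score across {len(applicable)} applicable criteria")
```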
It was also thought to be important that we did not replicate the rankings’ practice of placing multi-faceted entities on a single scale labelled ‘top’. Not only would this fail to express the relative strengths and weaknesses of each ranking, but it would give one ranking agency ‘boasting rights’, which would run counter to what we were trying to achieve.
The ‘E’ of SCOPE invites assessors both to evaluate, and to evaluate their evaluation. The ranker assessment generated many learning points, discussed in section 5 below, which fed into recommendations for the revision of the ranking assessment tool.
The full set of attributed ranking reviews and the final calibrated review have been made openly available (
Review volume and reliability.
UNIVERSITY RANKING | NUMBER OF REVIEWS (INCLUDING CALIBRATION) | INTRA-CLASS CORRELATION COEFFICIENT |
---|---|---|
ARWU | 5 | 0.662 |
CWTS Leiden | 4 | 0.862 |
QS | 4 | 0.604 |
THE WUR | 3 | 0.725 |
U-Multirank | 2 | 0.663 |
US News | 2 | 0.725 |
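The article does not specify which form of intra-class correlation was computed for the table above; the sketch below implements a one-way, single-rater ICC(1) over a criteria-by-reviewers score matrix as one plausible way of producing figures of this kind. The toy data are invented.

```python
import numpy as np

def icc_oneway(scores: np.ndarray) -> float:
    """One-way, single-score ICC(1) for a criteria-by-reviewers matrix.
    Rows are criteria (targets), columns are reviewers (raters)."""
    n, k = scores.shape                       # n criteria, k reviewers
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    ms_between = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((scores - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Toy data: five criteria scored 0-2 by three reviewers (illustrative, not the real reviews).
toy = np.array([[2, 2, 1], [1, 1, 1], [0, 1, 0], [2, 2, 2], [1, 0, 1]], dtype=float)
print(round(icc_oneway(toy), 3))
```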
The five key expectations of rankers with regard to good governance were that they engaged with the ranked, were self-improving, declared conflicts of interest, were open to correction, and dealt with gaming. The full criteria and indicators are listed in the table below.
Criteria and indicators - Good Governance.
A | CRITERIA: GOOD GOVERNANCE |
---|---|
A1 | Engage with the ranked. Has a clear mechanism for engaging with both the academic faculty at ranked institutions and their senior managers, for example, through an independent international academic advisory board, or other external audit mechanisms. |
A1.1 | Does the ranker have an independent international academic advisory board that is transparent and representative? |
A1.2 | Does the ranker have other mechanisms by which they engage with the ranked, e.g., summits, non-transparent consultations, etc.? |
A2 | Self-improving. Regularly applies measures of quality assurance to their ranking processes. |
A2.1 | Is there evidence that they are identifying problems with their own methodologies, and improving them? |
A3 | Declare any conflict of interests. Provides a declaration of potential conflicts of interest as well as how they actively manage those conflicts. |
A3.1 | Does the ranking make any reference to conflicts of interest on its web site? |
A3.2 | If yes, does the declaration outline how they actively manage those conflicts of interest? |
A3.3 | Does the ranking avoid selling services or data to support HEIs in improving their ranking position? |
A3.4 | If so, are the potential conflicts of interest here declared? |
A4 | Open to correction. Data and indicators should be made available in a way that errors and faults can be easily corrected. Any adjustments that are made to the original data and indicators should be clearly indicated. |
A4.1 | Do HEIs get a chance to check the data on themselves before being used for ranking purposes? |
A4.2 | Is there a clear line of communication through which ranked organisations can seek to correct any errors? |
A4.3 | Are corrected errors clearly indicated as such? |
A5 | Deal with gaming. Has a published statement about what constitutes inappropriate manipulation of data submitted for ranking and what measures will be taken to combat this. |
A5.1 | Does the ranking have a published statement about what constitutes inappropriate manipulation of data submitted for ranking? |
A5.2 | Does the statement outline what measures will be taken to combat this? |
A5.3 | Are those measures appropriate? |
One of the SCOPE principles is to evaluate with the evaluated, and the community felt that having continued engagement with both the faculty and leadership of organisations that they ranked was an important activity. The rankings tended to score well here with most having advisory boards, and all engaging in some form of outreach activity.
One of the biggest concerns about the rankings is their methodological imperfections. This question sought to highlight that ongoing improvement was an essential activity for ranking agencies. Again, all rankers either fully or partially met this criterion.
There was a belief amongst the community that ranking agencies should remain independent in order to fairly rank universities. As such, where there were conflicts of interest, i.e., where rankers sold their data or provided consultancy services to institutions with the ability to pay for it, this should be declared. No ranker fully met these expectations, and all received at least one zero in this section.
The community felt that an important aspect of good governance was that any errors drawn to ranking agencies’ attention should be corrected and clearly indicated as such. Where data was drawn entirely from third parties it was felt that this criterion was not applicable. In all other cases, HEIs were given some opportunity to check the data prior to the ranking being compiled. In most cases there was some line of communication by which HEIs could notify ranking agencies of errors, but only CWTS achieved full marks for clearly listing corrected errors.
The rewards associated with a high ranking position are such that ‘gaming’ is a regular feature (
Transparency was very important to the community, with many respondents making reference to the ‘black box’ nature of many rankings’ approaches. The five expectations of rankers here were that they had transparent aims, transparent methods, transparent data sources, open data, and financial transparency. The full criteria and indicators are listed in the table below.
Criteria and indicators - Transparency.
B | CRITERIA: TRANSPARENCY |
---|---|
B1 | Transparent aims. States clearly the purpose of the ranking, what it seeks to measure, and their target groups. |
B1.1 | Does the agency clearly state the ranking’s purpose, what it seeks to measure, and its target groups? |
B2 | Transparent methods. Publishes full details of their ranking methodology, so that given the data a third party could replicate the results. |
B2.1 | Does the agency publish full details of their ranking methods, including weightings, surveys, recruitment, etc.? |
B2.2 | Does the ranking’s website provide clear definitions of all the indicators used? E.g., what constitutes a publication. |
B2.3 | Does the ranking’s website clearly state how universities are defined and whether off-shore campuses and teaching hospitals are included in that definition? (You may wish to check a known university with an off-shore campus and teaching hospital, e.g., the University of Nottingham.) |
B2.4 | Could a third party with access to the data replicate the results? |
B3 | Transparent data availability. Provides detailed descriptions of the data sources being used, inclusion and exclusion parameters, date data snapshots were taken, and so on. |
B3.1 | Has the agency described in detail the sources of the data used to calculate the rankings? |
B3.2 | Is the data described fully, i.e. inclusion and exclusion parameters, date data snapshots were taken, format of the data, the quantity of data (for example, the number of records and fields in each table), the identities of the fields, and any other surface features? |
B3.3 | Does the agency provide clear opportunities for errors to be corrected? |
B4 | Open data. Makes all data on which the ranking is based available in an open standard non-proprietary format and, where possible, use open standard definitions and classifications (e.g. for subjects, publication types, etc.) to aid interoperability and comparability, and so that those being evaluated can verify the data and analysis. |
B4.1 | Does the agency provide access to the data to all institutions/researchers being ranked? |
B4.2 | Does the agency provide access to the data to anybody? |
B4.3 | If so, is the data available in a non-proprietary format? |
B5 | Financially transparent. Publishes details of all sources of income from consultancy services, training, events, advertising, and so on including financial outgoings, e.g. sponsorships. |
B5.1 | Does the agency publish all details of income sources arising from consultancy services, training, events, advertising, etc. it may obtain from the institutions/researchers being ranked? |
All rankers were either fully or partially transparent about the aims of their ranking and its target groups. Of course, transparency about their aims is not the same as successfully meeting them.
The requirement of transparent methods was particularly important to the research management community, as many are asked to reverse engineer their institution’s ranking position and make predictions about future performance. Whilst most rankers fully met expectations around publishing their methods and indicators, in only one case (ARWU) was it thought to be possible for a third-party with access to the data to be able to replicate the results.
Questions around data availability required rankers to describe both their sources and their parameters in detail, with a specific question regarding the ability to correct data. Again, all rankers fully or partially met these criteria.
In addition to data being fully described, it was felt to be important that this was also openly available for the community to scrutinise and work with. Only ARWU received full marks on this, with other rankings making some data available.
As with the declaration of conflicts of interest, the community were keen that ranking agencies were financially transparent, revealing sources of income. Only U-Multirank fully met this criterion, with four out of the remaining five failing to meet it.
The five expectations of rankers here were that they drove good behaviour, measured against mission, measured one thing at a time (no composite indicators), tailored results to different audiences, and gave no unfair advantage to universities with particular characteristics. The full criteria and indicators are listed in the table below.
Criteria and indicators - Measure what matters.
C | CRITERIA: MEASURE WHAT MATTERS |
---|---|
C1 | Drive good behaviour. Seeks to enhance the role of universities in society by measuring what matters, driving positive systemic effects and proactively seeking to limit any negative impacts such as over-reliance on rankings for decision-making. |
C1.1 | Does the ranking provide clear warnings on their website about the limitations of using rankings for decision-making? |
C1.2 | Does the ranking run any promotional campaigns around the limitations of rankings? |
C1.3 | Does the ranking measure a university’s approach to equality, diversity, sustainability, open access or other society-focussed agendas? |
C2 | Measure against mission. Accepts that different universities have different characteristics - mission, age, size, wealth, subject mix, geography, etc. - and makes these differences visible, so that universities can be clustered and compared fairly. |
C2.1 | Does the ranker avoid offering one single over-arching ranking that claims to assess the ‘top’ universities? |
C2.2 | Does the ranking provide a facility by which institutions can be compared to others that share their mission? |
C2.3 | Does the ranking provide a facility by which institutions can be compared to others within the same subject area? |
C2.4 | Does the ranking provide contextual, qualitative and quantitative information on each of the institutions they rank? |
C3 | One thing at a time. Does not combine indicators to create a composite metric thus masking what is actually being measured. |
C3.1 | Does the ranking AVOID combining indicators to create a composite metric? |
C4 | Tailored to different audiences. The ranking provides different windows onto the data that may be relevant to different audiences. For example, by providing an opportunity to focus in on teaching elements for students. |
C4.1 | Does the ranking offer the audience the opportunity to weight the indicators in accordance with their preferences? |
C4.2 | Does the ranking offer different ‘windows’ onto the same data for different audiences? |
C5 | No unfair advantage. Makes every effort to ensure the approach taken does not discriminate against organisations by size, disciplinary mix, language, wealth, age and geography. |
C5.1 | Does the ranking only use data sources that offer equal global representation? |
C5.2 | Does the ranking only use indicators that offer institutions of all sizes an equal opportunity to succeed? |
C5.3 | Does the ranking only use indicators that offer institutions of all disciplinary mixes an equal opportunity to succeed? |
C5.4 | Does the ranking only use indicators that offer institutions where English is not the primary language an equal opportunity to succeed? |
Given the widely acknowledged limitations of university rankings, the community felt it was important that ranking agencies themselves did their best to highlight these limitations on their products. CWTS Leiden and U-Multirank clearly did so; ARWU and US News did not; and QS and THE made some reference to them, which was felt to be undermined by their repeated description of their rankings as ‘trusted’ or ‘excellent’ sources.
Whilst universities largely seek to offer teaching and research in some form, their missions and other characteristics, such as size and wealth, are hugely varied. The community felt it was important that rankers provided a facility by which institutions could be compared to others with similar characteristics rather than grouping all together on a single scale. Only U-Multirank and CWTS Leiden avoided offering one single over-arching ranking that sought to identify the ‘top’ universities, with only U-Multirank providing a facility by which institutions could be compared with others sharing their mission. All provided subject-based comparisons and most provided some qualitative data on the organisations being ranked.
A criterion related to the avoidance of a single scale of excellence was that rankers should avoid composite metrics that apply pre-set weightings regardless of whether institutions weight their focus in the same way. Again, only CWTS Leiden and U-Multirank avoided composite metrics, thus achieving full marks.
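To illustrate the concern, the sketch below applies a set of invented fixed weights (not any ranker’s actual scheme) to two very different indicator profiles and shows that they can yield identical composite scores, masking where each institution is strong or weak.

```python
# Invented fixed weights summing to 1.0 -- illustrative only, not any ranker's real weighting.
weights = {"teaching": 0.3, "research": 0.3, "citations": 0.3, "international": 0.1}

def composite(indicators: dict[str, float]) -> float:
    """Weighted sum of indicator scores (each on a 0-100 scale)."""
    return sum(weights[name] * value for name, value in indicators.items())

uni_a = {"teaching": 90, "research": 30, "citations": 60, "international": 60}
uni_b = {"teaching": 30, "research": 90, "citations": 60, "international": 60}

print(composite(uni_a), composite(uni_b))  # 60.0 60.0 -- identical despite opposite strengths
```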
Recognising that rankings are used by different audiences for different purposes, the community felt it important that the ranking data collected was delivered in different formats according to the interests of these different audiences. No ranking fully met expectations here (although through the avoidance of composite indicators, this was not thought to be an applicable question for CWTS Leiden). Most others scored poorly.
Whilst we live in an ‘unfair’ world, it was still felt to be important that rankings avoid offering an unfair advantage to universities with particular characteristics (size, discipline, geography, and English language use) as far as possible. While it was felt that all rankings made some effort in this space, none scored full marks.
The five expectations of rankers in this section were around rigorous methods, no ‘sloppy’ surveys, validity, sensitivity, and honesty about uncertainty. The full criteria and indicators are listed in the table below.
Criteria and indicators - Rigour.
D | CRITERIA: RIGOUR |
---|---|
D1 | Rigorous methods. Data collection and analysis methods should pass tests of scientific rigour, including sample size, representation, normalisation, handling of outliers, etc. |
D1.1 | Does the ranking transparently normalise indicators, in a robust way, for field? |
D1.2 | Does the ranking clearly state how it handles outliers, and is this fair? |
D2 | No sloppy surveys. Limits the use of unverifiable survey information and ensures that, where surveys are used, the methods are sound and unbiased, e.g., samples are large, representative, and randomly selected; questions are reliability-tested and measure what they seek to measure. |
D2.1 | Does the ranking AVOID using opinion surveys to elicit reputational data? |
D2.2 | Where a ranking uses surveys, are large samples used? |
D2.3 | Where a ranking uses surveys, are random samples used? |
D2.4 | Where a ranking uses surveys, are representative samples used? |
D2.5 | Where a ranking uses surveys, are questions reliability-tested? |
D2.6 | Where a ranking uses surveys, are the questions valid? |
D3 | Validity. Indicators have a clear relationship with the characteristic they claim to measure. For example, teaching quality should not solely be indicated by staff-student ratios. |
D3.1 | Do indicators have a clear relationship with the characteristic they claim to measure? |
D4 | Sensitivity. Indicators are sensitive to the nature of the characteristic they claim to measure. |
D4.1 | Does the ranking AVOID including monotonic indicators for which a good value will depend on the mission of the university, e.g., staff-student ratio, international-to-non-international staff ratio? |
D4.2 | Are ranking results relatively stable over time? E.g., are improvements in rank likely to reflect true improvements in University performance? |
D5 | Honest about uncertainty. The types of uncertainty inherent in the methodologies used, and of the data being presented should be described, and where possible, clearly indicated using error bars, confidence intervals or other techniques, without giving a false sense of precision. |
D5.1 | Does the ranking website provide any commentary on the limitations and uncertainties inherent within their methodologies? |
D5.2 | Does the ranking provide error bars or confidence intervals around the indicators provided? |
A common complaint regarding the use of rankings by scientific organisations is that they use methods that those organisations would not consider valid in their own practices. Two questions around field normalisation and the handling of outliers yielded mixed results with some efforts around both but very few exemplars of best practice.
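As an illustration of what field normalisation involves, the sketch below divides each paper’s citation count by an invented world-average baseline for its field and year, in the general spirit of mean normalised citation scores; it is not any ranker’s actual method, and the baselines are made up.

```python
# Invented world-average citation baselines per (field, year) -- for illustration only.
baseline = {("Chemistry", 2018): 12.4, ("History", 2018): 2.1}

papers = [
    {"field": "Chemistry", "year": 2018, "citations": 25},
    {"field": "History", "year": 2018, "citations": 4},
]

# Each paper's citations relative to its own field's expected level.
normalised = [p["citations"] / baseline[(p["field"], p["year"])] for p in papers]
mncs = sum(normalised) / len(normalised)  # mean normalised citation score for this set
print([round(x, 2) for x in normalised], round(mncs, 2))
```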
While the community were not against survey methodologies per se, there was a strong sense that the rankers’ use of surveys was problematic, with questionable practices employed around sampling and question choice. ARWU and CWTS Leiden avoided using surveys altogether, thus achieving full marks on this criterion. Whilst sample sizes tended to be large, they were rarely random and not always thought to be representative. There was no evidence of reliability testing on questions, nor that the questions were entirely valid.
In any evaluation approach it is important that the indicators used are a valid enough proxy for the quality being measured. Only CWTS Leiden and U-Multirank were thought to fully meet this requirement, with the other rankers making some efforts but falling short of expectations.
Gingras (
In any evaluative data there are always going to be levels of uncertainty around the confidence with which the results can be relied on. The community were keen that confidence levels made visible the relatively small differences between organisations at different ranking positions. Again, only CWTS Leiden clearly expressed the limitations of their methodologies and provided stability indicators for their rankings. Others made some efforts with regards to the former but failed to score on the latter.
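One generic way of expressing such uncertainty is a bootstrapped stability interval around an indicator. The sketch below illustrates the idea with invented per-paper scores; it is a minimal illustration of the general technique, not a description of any ranker’s actual procedure.

```python
import random

def stability_interval(values, n_boot=2000, level=0.95, seed=1):
    """Bootstrap a stability/confidence interval for the mean of an indicator."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(values, k=len(values))) / len(values) for _ in range(n_boot)
    )
    lo = means[int((1 - level) / 2 * n_boot)]
    hi = means[int((1 + level) / 2 * n_boot)]
    return lo, hi

# Illustrative per-paper scores (e.g. 1 = paper in the top 10% of its field, 0 = not).
scores = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0]
print(stability_interval(scores))  # interval around the proportion of top-10% papers
```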
Spidergram illustrating the actual scores/total possible score for each global ranking.
Spidergram illustrating THE WUR scores on all twenty criteria.
Spidergram illustrating U-Multirank scores on all twenty criteria.
Spidergram illustrating QS scores on all twenty criteria.
Spidergram illustrating ARWU scores on all nineteen applicable criteria.
Spidergram illustrating US News scores on all nineteen applicable criteria.
Spidergram illustrating CWTS Leiden scores on all eighteen applicable criteria.
Whilst the rankings that score better on these indicators may feel pleased with their performance, it is important to note that the community expectations are set at 100% adherence to the criteria. The closest any of these ranking agencies came to that was CWTS Leiden which scored 100% on nine of the eighteen criteria deemed to be applicable to them. Indeed, when you look at the average scores for all six rankers across the four criteria (
Average scores of all six ranking agencies on the four criteria.
The process of piloting the ranker rating tool surfaced many helpful learning points, including feedback from one of the ranking agencies under assessment, CWTS Leiden, which it would be useful to incorporate into any future iteration. These are outlined below.
By putting out a ‘straw person’ list of draft criteria for fair and responsible university rankings and inviting free-form feedback, useful input was received. However, all the criteria were given equal weight in the resulting assessment tool. This may not reflect community expectations, with some criteria being of paramount importance and other criteria holding less importance. Future iterations of the assessment tool may wish to revisit both the chosen criteria and the resulting weightings via some kind of survey instrument. A survey may have the benefit of reaching a wider audience through the relative ease of completion and may enable a more nuanced assessment of ranking agencies.
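If a survey did yield criterion or theme weights, applying them would amount to a weighted average of the per-theme scores. The sketch below uses invented weights and scores purely to illustrate the mechanics of such a weighting.

```python
# Invented community-derived weights (higher = more important); not real survey results.
weights = {"good governance": 1.0, "transparency": 0.8, "measure what matters": 1.2, "rigour": 1.0}

# A ranking's share of the possible score in each theme (illustrative values only).
theme_scores = {"good governance": 0.6, "transparency": 0.8, "measure what matters": 0.4, "rigour": 0.5}

weighted = sum(weights[t] * theme_scores[t] for t in weights) / sum(weights.values())
print(f"Weighted adherence: {weighted:.0%}")  # e.g. 56%
```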
The selection of ranking agencies and the focus on their flagship ranking for this pilot was a pragmatic choice due to time constraints and the need to recruit reviewers. However, to provide a more complete assessment of the increasing number of global rankings, in an ideal world this exercise would be extended both to additional rankers and to additional rankings (e.g., subject rankings).
Due to the impact of COVID-19 on workloads and the resulting availability of expert reviewers, some rankers only received one expert review and one senior expert reviewer calibration in this exercise. In an ideal world each ranking would receive a minimum of two expert reviews plus calibration. Even better, in line with the principle of evaluating with the evaluated, each ranking agency would submit a self-assessment to fill in any gaps not publicly available, or not known to the expert reviewers. If such assessments grow in popularity and visibility, it may be that more ranking agencies become willing to provide a self-assessment to make the case for their activities.
It was noted by reviewers that multi-part questions such as D1.2 “Does the ranking clearly state how it handles outliers and is this fair?” were difficult to assess. In future, such questions should be split into two. The other over-arching recommendation is that a more granular scoring system, perhaps across a five-point scale, would allow for fairer assessment. In the current exercise the use of ‘partially meets’ covered a whole range of engagement with the stated criterion, from slightly short of perfection to a little better than fail.
There were also some issues with particular questions as outlined below.
A1 Engage with the ranked.
B1 Transparent aims.
B2 Transparent methods.
B3 Transparent data availability.
B4 Open data.
C5 No unfair advantage.
D1 Rigorous methods.
Other questions that might be useful to include would relate to the user-friendliness of the ranking web page, perhaps under C4 ‘Tailored to different audiences’, and the number of universities included in the ranking, perhaps under C5 ‘No unfair advantage’.
Global ranking agencies have a significant influence on the strategic and operational activities of universities worldwide, and yet they are unappointed and unaccountable. As a research management community we believe that there is a strong argument for providing an open and transparent assessment of the relative strengths and weaknesses of the global university rankings to make them more accountable to the higher education communities being assessed. We believe that the approach described in this report, as refined, offers a fair and transparent tool for running such assessments.
The findings of this exercise highlight that those rankings that are closest to the universities being assessed - the CWTS Leiden Ranking, run by a university research group, and U-Multirank, run by a consortium of European universities - tended to better meet the community’s expectations of fairness and responsibility. Unfortunately, those rankings that are more heavily relied upon by decision-makers, such as the Times Higher Education World University Rankings and the US News & World Report ranking, tended to score less well. However, all rankings fell short in some way, and this work highlights where they might focus their attention.
One of the challenges of short-term project-based work such as this is long-term sustainability and influence, options for which are now being explored by the INORMS REWG. As well as drawing this work to the attention of ranking agencies, it needs to also reach those relying on rankings data for decision-making. This is also one of the next steps for the group. Overall, this work has been warmly welcomed by the HE community, and by some of the rankers assessed. We hope that the next iteration of this tool, revised in line with our recommendations, will play a formative role in improving the design of university rankings and limiting their unhelpful impacts on the HE community.
The additional file for this article can be found as follows:
Qualitative and quantitative ranker ratings.
The authors should like to acknowledge the ranker ratings provided by the INORMS Research Evaluation Working Group: Laura Beaupre (University of Guelph), Lone Bredahl Jensen (Southern Denmark University), Laura Himanen (Tampere University), Aline Rodrigues (Sociedade Beneficente Israelita Brasileira Hospital Albert Einstein), Hirofumi Seike (Tohoku University), Justin Shearer (University of Melbourne), Tanja Strom (Oslo Metropolitan University), Baron Wolf (University of Kentucky), and Baldvin Zarioh (University of Iceland), and the Expert Reviewers Group: Stephen Curry (Imperial College, London), Markku Javanainen (University of Helsinki), Kristján Kristjánsson (Reykjavik University), Jacques Marcovitch (Universidade de São Paulo (USP)), Maura McGinn (University College Dublin), Cameron Neylon (Curtin University), Nils Pharo (Oslo Metropolitan University), and Claus Rosendal (Southern Denmark University).
The authors have no competing interests to declare.