How do decision makers in science use bibliometric indicators, and to what extent do they rely on them? Could bibliometric indicators replace the decision makers’ judgments (partly or completely)? Bornmann and Marewski (2019) suggest that these and similar questions can be empirically answered by studying the evaluative use of bibliometrics within the heuristics research program conceptualized by Gigerenzer, Todd, and the ABC Research Group (1999). This program can serve as a framework within which the evaluative use of bibliometrics can be conceptually understood, empirically studied, and effectively taught. This short communication provides a brief overview of the main lines of argument suggested by Bornmann and Marewski (2019).
Keywords: heuristics; bibliometrics; heuristics research program; fast-and-frugal heuristics; bibliometrics-based heuristics (BBH)

Introduction
When decision makers in science (scientists or managers) assess units (e.g., research groups or institutions), complex solutions are generally preferred: decisions rely on elaborate and time-consuming peer review processes. Since scientific quality is seen as a multi-dimensional phenomenon, such complex procedures are regarded as the gold standard for research evaluation. In a recent paper, Bornmann and Marewski (2019) argue that in certain evaluation situations bibliometrics-based heuristics (BBHs) might be good alternatives to complex assessment processes. BBHs are reasonable decision strategies that reduce the information considered in assessments to indicators based on publication and citation data. In this short communication, the main lines of argument proposed by Bornmann and Marewski (2019) are briefly summarized.
Bibliometrics is a method for assessing research on the basis of publications and their citations; it is increasingly used in various research evaluation processes, alongside and in combination with peer review (Moed, 2005, 2017). The popularity of the method can be traced back to the following advantages:
the data are available in comprehensive databases (e.g., Dimensions, Scopus, or Web of Science), and many scientists and decision makers in science have good access to the data;
the data can be used very flexibly for evaluative analyses, within single fields and for cross-field comparisons;
bibliometric indicators positively correlate with other indicators used for research evaluation (see, e.g., Gralka, Wohlrabe, & Bornmann, 2019);
the bibliometric method is rooted in the research process: in nearly every discipline, researchers publish their results and embed them in the results of previous studies by using citations (see Merton, 1965).
In recent years, empirical studies have shown that bibliometrics yields results similar to those of reviewers in various research evaluation situations. Expensive and time-consuming peer review processes are used in many evaluations; one of these complex processes is the UK Research Excellence Framework (REF). In a recent study, Pride and Knoth (2018) investigated the correlation between bibliometrics (citations in Microsoft Academic) and peer review (grade point average ranking). The study is based on nearly 200,000 papers from 36 disciplines submitted to the REF 2014. The authors found that the citations of the papers are strongly correlated with peer review assessments at the level of institutions and disciplines. The authors also demonstrated that the citation data can be used to predict peer-review-based institutional rankings in certain disciplines. The high degree of prediction accuracy of bibliometrics found in this study calls the use of expensive and time-consuming peer review processes into question.
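As a minimal, purely illustrative sketch of this kind of analysis, the following Python snippet computes a rank correlation between an institution-level citation score and a peer-review grade point average. The institutions, the numbers, and the choice of Spearman’s rho are assumptions made for illustration (scipy is assumed to be available); none of this is taken from Pride and Knoth (2018).

```python
# Illustrative sketch: rank correlation between an institution-level
# citation score and a peer-review grade point average (GPA).
# All institutions and values are hypothetical, not REF data.
from scipy.stats import spearmanr

institutions = {
    "Institution A": {"mean_citations": 42.0, "peer_review_gpa": 3.4},
    "Institution B": {"mean_citations": 18.5, "peer_review_gpa": 2.9},
    "Institution C": {"mean_citations": 55.3, "peer_review_gpa": 3.6},
    "Institution D": {"mean_citations": 7.2, "peer_review_gpa": 2.1},
    "Institution E": {"mean_citations": 23.9, "peer_review_gpa": 3.1},
}

citation_scores = [v["mean_citations"] for v in institutions.values()]
gpa_scores = [v["peer_review_gpa"] for v in institutions.values()]

# Spearman's rho compares the two rankings of the institutions.
rho, p_value = spearmanr(citation_scores, gpa_scores)
print(f"Spearman correlation between citations and peer review: {rho:.2f}")
```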
The study by Pride and Knoth (2018) is not an isolated case: high correlations between bibliometrics and peer review have been reported not only for the REF (e.g., Harzing, 2017; Traag & Waltman, 2019), but also for other peer review processes (e.g., Auspurg, Diekmann, Hinz, & Näf, 2015; Diekmann, Näf, & Schubiger, 2012). The results of these studies indicate that a focus on bibliometrics might not be a limitation in decision making: simple decision-making strategies based on bibliometrics can be as accurate as complex strategies such as peer review processes. Bornmann and Marewski (2019) recommend that assessment processes in science be empirically studied to reveal the situations in which this is probably the case and those in which it is not.
The investigation of the relation between bibliometrics and peer review is particularly interesting in informed peer review processes, i.e., evaluation processes in which decision makers are informed by certain indicators. How do the decision makers use these indicators, and to what extent do they rely on them? Could the indicators replace the decision makers’ judgments (partly or completely)? Do decision makers simply count the number of papers in order to assess researchers? Do they take into account the Journal Impact Factor (JIF), one of the most popular journal metrics, of the journals in which these papers appeared? Do they prefer to rely on the h index for their decisions? Bornmann and Marewski (2019) suggest that these and similar questions can be empirically answered by studying the use of bibliometrics within the heuristics research program.
Heuristics research program and bibliometrics-based heuristics (BBHs)
The conceptualization and study of BBHs go back to the heuristics research program introduced by Gigerenzer et al. (1999). Many decision makers apply fast-and-frugal heuristics (simple decision strategies or rules of thumb) in the process of making judgments (see, e.g., Astebro & Elhedhli, 2006). Research from the area of judgement and decision making shows that decisions based on a small amount of information can yield similar or better results than decisions made using a large amount of information (Gigerenzer & Gaissmaier, 2011). Bornmann and Marewski (2019) assume that this might also be the case for decisions in research evaluation. The authors conceptualize the use of bibliometrics in research evaluation based on the heuristics research program and introduce the term BBH. Reasonable BBHs might be able to significantly reduce the effort and time involved in decision making in research evaluation.
Research on BBHs is not interested in whether certain bibliometric indicators are mathematically correct; rather, its purpose is to reveal decision situations in which the application of bibliometrics leads to appropriate judgements. The heuristics research program is rooted in the bounded rationality approach of Simon (1956, 1990). Simon argues that people prefer simple decision strategies when resources (e.g., cognitive capacity or time) are scarce and when many decisions are required within a short time. Through the use of these strategies, people are ecologically oriented in their decision making. The unbounded rationality approach stands in contrast to the bounded rationality approach: it assumes that people base their decisions on complete information. The approach is logically oriented: various criteria are applied to weight many pieces of information and combine them to reach an ‘optimal’ decision. The unbounded rationality approach assumes unlimited time, complete knowledge, and vast information-processing capacity. Since these assumptions are seldom fulfilled in typical research evaluation processes, the bounded rationality approach seems more realistic.
Although the application of fast-and-frugal heuristics in research evaluation has never been investigated, results from other fields are available which suggest that fast-and-frugal heuristics are also relevant for research evaluation. The overviews by Marewski, Schooler, and Gigerenzer (2010) and Katsikopoulos (2011) show, for instance, that heuristics are able to contextualize and inform diagnosis in medicine.
Bornmann and Marewski (2019) argue that adapting the heuristics research program (Gigerenzer et al., 1999) to evaluative bibliometrics would be an important step towards conceptualizing the evaluative use of bibliometric indicators. The bibliometrics-based social heuristic described in the next section may serve as an example of such a conceptualization; the section shows which reasons can be given for the use of citations for evaluative purposes. Other examples of conceptualizations can be found in Bornmann and Marewski (2019), Bornmann (2019), and Bornmann (2020). For example, Bornmann and Marewski (2019) focus on one-reason decision-making heuristics, which base decisions on a single ‘clever’ cue without considering any other information. This cue can be a certain bibliometric indicator, such as the number of papers belonging to the 10% most frequently cited papers in their field and publication year.
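To illustrate how such a one-reason heuristic could be formalized, the following sketch decides between two hypothetical research groups solely on the basis of this single cue. The function, the cue name, and the numbers are illustrative assumptions and are not part of Bornmann and Marewski (2019).

```python
# Sketch of a one-reason BBH: the decision relies on a single cue, here the
# number of a unit's papers among the 10% most frequently cited papers in
# their field and publication year. Unit names and values are hypothetical.

def one_reason_bbh(unit_a: dict, unit_b: dict, cue: str = "papers_top10") -> str:
    """Prefer the unit with the higher value on the single cue; otherwise report a tie."""
    if unit_a[cue] > unit_b[cue]:
        return unit_a["name"]
    if unit_b[cue] > unit_a[cue]:
        return unit_b["name"]
    return "no difference on this cue"

group_x = {"name": "Group X", "papers_top10": 14}
group_y = {"name": "Group Y", "papers_top10": 9}

print(one_reason_bbh(group_x, group_y))  # prints "Group X"
```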
Bibliometrics-based social heuristics: citations as wisdom of crowds
When classifying objects (e.g., papers) according to their ‘quality’, social information and thus social heuristics might come into play and be useful from an ecological point of view. As Gigerenzer and Gaissmaier (2011) point out, “social heuristics prove particularly helpful in situations in which the actor has little knowledge” (p. 472). Why and when can social information be useful in science evaluation? Bornmann and Marx (2014) discuss the benefits of citation analysis in research evaluation based on the phenomenon demonstrated by Galton (1907) that is now known as the ‘wisdom of crowds’. Galton reports on the results “of a contest at an English livestock show where contestants were asked to guess the weight of a publicly displayed ox. After sorting the 787 entries by weight, Galton found that the median estimate of 1,207 pounds differed from the true weight of 1,198 pounds by less than 1%” (Herron, 2012, p. 2278). When many people make a judgement or an estimate, the aggregate can be expected to be valid (see also Laland, 2001). Averaging the judgments of others has been proposed as a social heuristic to exploit the ‘wisdom of crowds’ (Hertwig & Herzog, 2009).
Citation analysis is based on the judgements of many others: a publication is frequently cited by the scientific community when it has turned out to be useful. This ‘wisdom of crowds’ in bibliometrics usually manifests itself in skewed citation distributions: only a few publications receive many citations, and most publications receive only a few citations or none at all. Only a few publications seem to be extremely useful in various fields, research contexts, and time periods in promoting science and development (Bornmann, de Moya Anegon, & Leydesdorff, 2010); most publications do not seem to be of significant importance for later research. In other words, such social citation data point to an unequal distribution of the usefulness and recognition of research. Based on the conceptualization of citations as a bibliometrics-based social heuristic, it might be ecologically rational to rely on citation counts to single out very useful papers.
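The following sketch illustrates this point with simulated data: citation counts drawn from a skewed distribution stand in for the aggregated ‘votes’ of the community, and a simple heuristic singles out the papers above a percentile threshold. The lognormal distribution, the 90th-percentile cut-off, and the use of numpy are assumptions chosen for illustration; they are not prescribed by the cited literature.

```python
# Sketch: citation counts as aggregated community 'votes'. A skewed
# (lognormal) distribution and a 90th-percentile cut-off are assumptions
# chosen for illustration only.
import numpy as np

rng = np.random.default_rng(seed=42)

# Most papers receive few citations; a small minority receive very many.
citations = rng.lognormal(mean=1.0, sigma=1.2, size=1000).astype(int)

threshold = np.percentile(citations, 90)
highly_cited = citations[citations >= threshold]

print(f"Median citations per paper: {np.median(citations):.0f}")
print(f"90th-percentile threshold:  {threshold:.0f}")
print(f"Papers singled out as very useful: {highly_cited.size}")
```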
Goals for investigating bibliometrics-based heuristics
Following Raab and Gigerenzer (2015), Bornmann and Marewski (2019) define three goals for investigating BBHs within the heuristics research program:
The first goal is descriptive. It asks which BBHs decision makers use in research evaluation. The goal is connected to the ‘adaptive toolbox’, which contains “heuristics a decision maker uses to respond adaptively to different decision situations, each one appropriate for a given task” (Marewski et al., 2010, p. 73).
The second goal is prescriptive: when should decision makers use which heuristics? This question concerns the ecological rationality of decisions. The study of ecological rationality “asks the question, in which environment a given strategy is successful (with respect to a defined criterion such as accuracy or speed of decision), and where will it fail” (Marewski et al., 2010, p. 74). Empirical research on the second goal examines in which contexts a certain BBH performs well and in which contexts it does not (and another BBH or a specific peer review process performs better); a simple comparative test of this kind is sketched after the three goals. This research focuses on the quality of BBH decisions.
The third goal is methodological and addresses the study of heuristics: which methods and study designs can be applied to investigate the use of BBHs in decision making?
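To make the second goal concrete, the following sketch compares a one-cue BBH with a more complex weighted combination of indicators on simulated evaluation data, counting how often the two strategies rank a pair of units in the same order. The cues, the weights, and the simulated data are illustrative assumptions, not an established evaluation design from the cited literature.

```python
# Sketch of an ecological-rationality test (second goal): how often does a
# one-cue BBH rank two hypothetical units in the same order as a complex
# weighted combination of indicators? Cues, weights, and data are assumed.
import random

random.seed(1)

def complex_score(unit):
    # 'Unbounded' strategy: weight and combine several indicators.
    return 0.6 * unit["papers_top10"] + 8.0 * unit["mncs"] + 0.05 * unit["output"]

def bbh_prefers_a(unit_a, unit_b):
    # One-reason BBH: decide on the single cue alone.
    return unit_a["papers_top10"] >= unit_b["papers_top10"]

def random_unit():
    return {
        "papers_top10": random.randint(0, 30),  # papers in the top 10% cited
        "mncs": random.uniform(0.5, 2.5),       # mean normalized citation score
        "output": random.randint(10, 200),      # number of publications
    }

trials = 1000
agreements = 0
for _ in range(trials):
    a, b = random_unit(), random_unit()
    agreements += bbh_prefers_a(a, b) == (complex_score(a) >= complex_score(b))

print(f"BBH agrees with the complex model in {agreements / trials:.0%} of pairs")
```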
Many of the studies that have hitherto empirically investigated bibliometrics focus on certain flawed bibliometric indicators (e.g., the h index). The authors of these studies have proposed replacing flawed indicators with more suitable ones; for example, at least 50 variants of the h index have been introduced since 2005 (Gasparyan et al., 2018). Only a few studies in scientometrics have investigated the use of bibliometrics in specific research evaluation situations (e.g., Hammarfelt, Rushforth, & de Rijcke, 2020; Moed, Burger, Frankfort, & van Raan, 1985; van den Besselaar & Sandström, 2020). The study by van den Besselaar and Sandström (2020) also deals with ways of handling contradictory bibliometric information (e.g., from different literature databases) and with the question of why and when decision makers deviate from the bibliometric information.
All these studies on (the use of) bibliometrics are necessary, but have not been undertaken in the context of a coordinated research program. Bibliometric research lacks a common program that could function as a framework for empirical research. The heuristics research program can serve as such a framework, so that the usage of bibliometrics can be conceptually understood, empirically studied, and effectively taught (Bornmann & Marewski, 2019).
Discussion
If one assumes that researchers apply bibliometric shortcuts (BBHs) to reach decisions in research evaluation, one should investigate (1) how researchers use these shortcuts and (2) how successful the shortcuts are in various evaluative situations. According to Mousavi and Gigerenzer (2017), “heuristics per se are neither good nor bad. They can be evaluated only with respect to the environment in which they are used. The more functional the match is between a heuristic and the environment, the higher the degree of ecological rationality of the heuristic. The functionality of this match is verifiable by the extent of its success rather than fulfillment of coherence requirements” (p. 367). Since evaluative processes and the use of bibliometric indicators in these processes change continuously, evaluation practices should be continuously studied empirically. For the appropriate evaluation of heuristics, Marewski et al. (2010) recommend that the following principles be followed: (i) specify precise, formal models of heuristics that can be tested; (ii) test heuristics comparatively to identify better heuristics; (iii) test heuristics in new situations; and (iv) test heuristics in real-life settings.
Competing Interests
The author has no competing interests to declare.
References

Astebro, T., & Elhedhli, S. (2006). The effectiveness of simple decision heuristics: Forecasting commercial success for early-stage ventures. , 52(3), 395-409. DOI: 10.1287/mnsc.1050.0468

Auspurg, K., Diekmann, A., Hinz, T., & Näf, M. (2015). The research rating of the German Council of Science and Humanities: Revisiting reviewers’ scores of sociological research units. , 66(2), 177-191. DOI: 10.5771/0038-6073-2015-2-177

Bornmann, L. (2019). Bibliometrics-based decision tree (BBDT) for deciding whether two universities in the Leiden ranking differ substantially in their performance. , 122(2), 1255-1258. DOI: 10.1007/s11192-019-03319-1

Bornmann, L. (2020). Bibliometrics-based decision trees (BBDTs) based on bibliometrics-based heuristics (BBHs): Visualized guidelines for the use of bibliometrics in research evaluation. , 1(1), 171-182. DOI: 10.1162/qss_a_00012

Bornmann, L., de Moya Anegon, F., & Leydesdorff, L. (2010). Do scientific advancements lean on the shoulders of giants? A bibliometric investigation of the Ortega hypothesis. , 5(10). DOI: 10.1371/journal.pone.0013327

Bornmann, L., & Marewski, J. N. (2019). Heuristics as conceptual lens for understanding and studying the usage of bibliometrics in research evaluation. , 120(2), 419-459. DOI: 10.1007/s11192-019-03018-x

Bornmann, L., & Marx, W. (2014). The wisdom of citing scientists. , 65(6), 1288-1292. DOI: 10.1002/asi.23100

Diekmann, A., Näf, M., & Schubiger, M. (2012). The impact of (Thyssen)-awarded articles in the scientific community. , 64(3), 563-581. DOI: 10.1007/s11577-012-0175-4

Galton, F. (1907). Vox populi. , 75, 450-451. DOI: 10.1038/075450a0

Gasparyan, A. Y., Yessirkepov, M., Duisenova, A., Trukhachev, V. I., Kostyukova, E. I., & Kitas, G. D. (2018). Researcher and author impact metrics: Variety, value, and context. , 33(18). DOI: 10.3346/jkms.2018.33.e139

Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic decision making. , 62, 451-482. DOI: 10.1146/annurev-psych-120709-145346

Gigerenzer, G., Todd, P. M., & ABC Research Group. (1999). . Oxford, UK: Oxford University Press.

Gralka, S., Wohlrabe, K., & Bornmann, L. (2019). How to measure research efficiency in higher education? Research grants vs. publication output. , 41(3), 322-341. DOI: 10.1080/1360080X.2019.1588492

Hammarfelt, B., Rushforth, A., & de Rijcke, S. (2020). Temporality in academic evaluation: ‘Trajectoral thinking’ in the assessment of biomedical researchers. , 7, 33. DOI: 10.3384/VS.2001-5992.2020.7.1.33

Harzing, A.-W. (2017). Running the REF on a rainy sunday afternoon: Do metrics match peer review? Retrieved August 5, 2018 from https://harzing.com/publications/white-papers/running-the-ref-on-a-rainy-sunday-afternoon-do-metrics-match-peer-review; https://openaccess.leidenuniv.nl/handle/1887/65202

Herron, D. M. (2012). Is expert peer review obsolete? A model suggests that post-publication reader review may exceed the accuracy of traditional peer review. , 26(8), 2275-2280. DOI: 10.1007/s00464-012-2171-1

Hertwig, R., & Herzog, S. M. (2009). Fast and frugal heuristics: Tools of social rationality. , 27(5), 661-698. DOI: 10.1521/soco.2009.27.5.661

Katsikopoulos, K. V. (2011). Psychological heuristics for making inferences: Definition, performance, and the emerging theory and practice. , 8(1), 10-29. DOI: 10.1287/deca.1100.0191

Laland, K. N. (2001). Imitation, social learning, and preparedness as mechanisms of bounded rationality. In G. Gigerenzer & R. Selten (Eds.), (pp. 233-247). Cambridge, MA, USA: MIT Press.

Marewski, J. N., Schooler, L. J., & Gigerenzer, G. (2010). Five principles for studying people’s use of heuristics. , 42(1), 72-87. DOI: 10.3724/SP.J.1041.2010.00072

Merton, R. K. (1965). . New York, NY, USA: Free Press.

Moed, H. F. (2005). . Dordrecht, The Netherlands: Springer.

Moed, H. F. (2017). . Heidelberg, Germany: Springer. DOI: 10.1007/978-3-319-60522-7

Moed, H. F., Burger, W. J. M., Frankfort, J. G., & van Raan, A. F. J. (1985). The use of bibliometric data for the measurement of university research performance. , 14(3), 131-149. DOI: 10.1016/0048-7333(85)90012-5

Mousavi, S., & Gigerenzer, G. (2017). Heuristics are tools for uncertainty. , 34(4), 361-379. DOI: 10.1007/s41412-017-0058-z

Pride, D., & Knoth, P. (2018). Peer review and citation data in predicting university rankings: A large-scale analysis. In E. Méndez, F. Crestani, C. Ribeiro, G. David & J. Lopes (Eds.), (pp. 195-207). Cham, Switzerland: Springer. DOI: 10.1007/978-3-030-00066-0_17

Raab, M., & Gigerenzer, G. (2015). The power of simplicity: A fast-and-frugal heuristics approach to performance science. , 6. DOI: 10.3389/fpsyg.2015.01672

Simon, H. A. (1956). Rational choice and the structure of the environment. , 63(2), 129-138. DOI: 10.1037/h0042769

Simon, H. A. (1990). Invariants of human-behavior. , 41, 1-19. DOI: 10.1146/annurev.ps.41.020190.000245

Traag, V. A., & Waltman, L. (2019). Systematic analysis of agreement between metrics and peer review in the UK REF. , 5(1). DOI: 10.1057/s41599-019-0233-x

van den Besselaar, P., & Sandström, U. (2020). Bibliometrically disciplined peer review: On using indicators in research evaluation. , 2(1). DOI: 10.29024/sar.16