Evaluative informetrics, defined as the study of evaluative aspects of science and scholarship using citation analysis, altmetrics and other indicators, cannot itself provide a foundation for evaluation criteria or policy objectives. However, informetric indicators are often used in research assessment processes. To obtain a better understanding of their role, the links between evaluative informetrics and ‘values’ are investigated, and a series of practical guidelines are proposed. Informetricians should maintain in their evaluative informetric studies a neutral position toward the policy issues addressed and the criteria specified in an evaluative framework. As professional experts, informetricians’ competence lies in informetric methodology, not in the validity of evaluation criteria or policy objectives. Informetric researchers could propose that evaluators and policy makers incorporate fundamental scientific values such as openness and adopting a critical attitude in assessment processes. Informetricians could also promote and participate in an overall discussion within the academic community and the research policy domain about the objectives and criteria in research assessment processes and the role of informetric tools therein.
Evaluative informetrics is defined as the study of evaluative aspects of science and scholarship using informetric data and methodologies, such as citation analysis and altmetrics. Following the main lines of an article by the Dutch philosopher O.D. Duintjer, nine interfaces are distinguished between quantitative science studies, especially evaluative informetrics, and the domain of values, including scientific, socio-historical, political, ethical and personal norms and objectives. Special attention is given to the “principle of value neutrality” at the meta-level of methodological rules guiding scientific inquiry and to the crucial, independent role of evaluative frameworks in research evaluation. The implications of the various relationships between science and values for research practices in evaluative informetrics and for its application in research assessment are considered.
The key notion that the author wishes to put forward is that of a “principle of value neutrality”, positioned at the meta-level of methodological rules guiding scientific inquiry.
An article published by O.D. Duintjer provides a framework not merely to highlight the practical consequences of the methodological principle of value neutrality but also to illustrate the various other ways in which science in general, and evaluative informetrics in particular, relates to the domain of values.
The current author is not equipped to put forward a comprehensive overview of the theoretical-philosophical debate that has been going on for so many years about the distinction between ‘facts’ and ‘values’. Readers who expect a thorough foundation for the philosophical assumptions underlying this essay will be disappointed.
In addition, the current paper is of a more theoretical-philosophical than empirical nature. What it does offer is a series of theoretical distinctions that aim to help informetricians, evaluators using informetric indicators, and researchers subjected to assessment to obtain insight into the potential and limits of informetrics. These distinctions discern between issues that can be solved informetrically and those relating to evaluative or political premises that play a key role in the application of informetric indicators but, relevant as they may be, on which informetrics has no jurisdiction.
Recent discussions in the science policy domain on responsible research and responsible metrics focus on guidelines for evaluation in general and for the use of informetric indicators in particular (e.g.,
Citation analysis and related tools from evaluative informetrics and its applications in research assessment deal in many ways with values. As a consequence, the question arises as to how the two domains are related.
Adopting a theoretical-philosophical approach, this contribution gives an overview of the various ways in which evaluative informetrics and ‘values’ are related. Moreover, it discusses the implications of this overview for the role of evaluative informetrics in its research practices and in research assessment.
The term
The following example may clarify the difference between citation impact indicators as a measurement instrument and as an evaluation criterion.
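By way of a minimal numerical sketch (using the well-known mean normalized citation score, MNCS, as a stand-in indicator; the figures are hypothetical), suppose a department published N = 3 papers receiving citation counts c_i against world-average field baselines e_i:

$$
\mathrm{MNCS} = \frac{1}{N}\sum_{i=1}^{N}\frac{c_i}{e_i} = \frac{1}{3}\left(\frac{10}{5} + \frac{2}{4} + \frac{6}{3}\right) = \frac{2 + 0.5 + 2}{3} = 1.5
$$

Computing the value 1.5 is a purely descriptive, informetric matter. Whether 1.5 counts as ‘good enough’, for instance to justify a funding decision, is an evaluative judgment that cannot be derived from the indicator itself.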
Following the main lines of an article by the Dutch philosopher O.D. Duintjer, the current contribution distinguishes nine interfaces between modern science and values (Duintjer, 1970).
Special attention is given to a “principle of value neutrality”, positioned at the meta-level of methodological rules guiding scientific inquiry. The principle has its roots in Max Weber’s ideas on value freedom in the social sciences.
Weber’s concept was further developed by the German philosopher Hans Albert, who, building upon ideas by Karl Popper, conceived of value neutrality as a methodological principle located at this meta-level.
Nine interfaces between modern science and values (Duintjer, 1970).
| No. | Interface between science and values | Further explanation and typical examples from evaluative informetrics, chosen by the current author |
|---|---|---|
| 1 | Values as the object of scientific research | Robert K. Merton’s study of the norms of science. Empirical investigation of how scientists perceive research quality. |
| 2 | Values in the language of scientific research about the object | Personal judgments can be mistakenly conceived as representing knowledge of the object validated in empirical research. Use of the term ‘closed’ as the opposite of ‘open’ access to scientific journals is suggestive. |
| 3 | Values that lie at the basis of scientific research | Value neutrality as a methodological principle: scientific statements should be free of any value judgment. Evaluation criteria and policy objectives cannot be founded informetrically. |
| 4 | Values behind selective viewpoints and core concepts of scientific theories | Extra-scientific values behind selective viewpoints should be made explicit. The assumption that generating impact is per se valuable is such a value. |
| 5 | Values when setting up investigations, the results of which are foreseeably relevant to certain societal powers | Institutions may commission informetric studies with the aim of solving an internal conflict or enhancing their international status. Critical informetric studies may aim at questioning funding policies. |
| 6 | Values as a limit on what is permissible in scientific experiments with laboratory animals and test subjects | The publication of outcomes of ‘experimental’ bibliometric studies may harm the prestige of the study subjects, especially individual researchers and departments. |
| 7 | Modern science produces, by virtue of its structure, results that necessitate an overall discussion about the ends pursued in society | Within the academic community, an overall discussion is needed about the objectives and criteria in research assessment processes and the role of informetric indicators therein. |
| 8 | Extension of elements from the ethos of science to moral values outside of science | Extending the principles of openness and adopting a critical attitude toward the formation of an evaluative judgment in a research assessment process. |
| 9 | The value area of modern science as part of a wider lifeworld | In research assessment, distinct value domains come together, both from the sciences, social sciences and humanities and from extra-scientific areas, such as politics. |
The interfaces presented in the next section are interpreted in terms of the value configuration underlying the field of quantitative science studies, especially evaluative informetrics. In this way, the current contribution not only explains Duintjer’s interfaces between modern science and values but also aims to illuminate the role of values in evaluative informetrics. The final section of this contribution discusses some of the consequences of the principles outlined and distinctions made by Duintjer for research in evaluative informetrics and its application in research assessment.
A well-known example of the study of values is the work by Robert K. Merton on social norms in scientific-scholarly research communities, conceived by him as ideals that are dictated by the goals and methods of science and are binding on scientists (
A typical example from the field of quantitative science studies is empirical research into how researchers perceive “research quality”. This type of research does not aim at drawing conclusions as to whether such perceptions are valid or not, but at examining which perceptions researchers actually hold.
In scientific-scholarly research practice, at the level of theoretical-empirical statements about objects, value judgments may enter, reflecting the valuation or appreciation of the analyst toward the object. Such personal judgments can be mistakenly conceived as representing knowledge of the object validated in the research itself. A typical example is the use of the term ‘closed access’ to indicate subscription-based access to scientific journals. Additionally, the use of the term ‘peripheral journal’—even if it has a well-defined meaning in network analysis—may give rise to a negative impression of a journal among non-experts.
The methodological requirement of value neutrality at the level of statements on objects based on empirical research dictates that such personal valuations of the object of research should be avoided. A researcher who wants to make them known to the reader should make clear that the qualifications are not based on the research itself, but are a priori assumptions of the investigator. The requirement of value neutrality also has the following implication: it is methodologically incorrect to neglect, on the basis of a value interest, relevant facts that fall within the intended field of research but contradict a preconceived conclusion and to one-sidedly emphasize facts that do support it.
This section is not about values as research object, nor about scientifically unfounded or hidden prescriptions and valuations by the analyst, but about the value basis on which science itself operates, including rules that standardize and direct scientific activity. These values can be explicated and discussed at a level higher than that on which scientific statements are made. It can be denoted as a meta-level.
The principle of value neutrality is one of these meta-level rules: it requires that scientific statements be free of any value judgment.
The principle of value neutrality has strong implications for the position of an informetric analyst contributing to an assessment process toward the policy objectives underlying the process and the evaluation criteria applied in it. As stated by Moed, “A basic notion holds that from what is cannot be inferred what ought to be. Evaluation criteria and policy objectives are not informetrically demonstrable values. Of course, empirical informetric research may study quality perceptions, user satisfaction, the acceptability of policy objectives, or effects of particular policies, but they cannot provide a foundation of the validity of the quality criteria or the appropriateness of policy objectives. Informetricians should maintain in their informetric work a neutral position towards these values” (
The methodological rules stated at the meta-level constitute scientific practice and have an internal-scientific character. However, the methodological principle of value neutrality does not preclude that extra-scientific values also play a role in scientific research; the following interfaces show where they enter.
Duintjer underlines that each problem statement is guided by selective points of view that are incorporated in theoretical core concepts and that determine the selection of empirical data. Extra-scientific values often hide behind these selective viewpoints and core concepts. “The fact that scientific research is guided by selective points of view, which in turn are inspired by social values, is not formally the same as a normative setting of these values. In research, these values can be assigned a hypothetical status” (Duintjer, 1970, p. 29).
Against the dependence on extra-scientific values, he recommends taking the following precautions: “making the concealed or implicit value background of selective viewpoints explicit and recognition of its non-scientific character; insight in the limits of the empirical knowledge acquired within such points of view; receptivity to investigations adopting different or wider viewpoints” (Duintjer, 1970, p. 32).
In quantitative science studies and applied evaluative informetrics, the concept of research impact plays a key role. In many studies, a tacit assumption seems to be that generating impact is per se a positive achievement. Following Duintjer’s precautions, such value-laden assumptions should be made explicit and recognized as extra-scientific.
A second example relates to the distinction between formative and summative evaluation. In summative evaluation, the focus is on the outcome of a particular activity; this outcome can, for instance, be used in a funding decision. Formative evaluation assesses the development of an activity at a particular time and focuses on improving its performance. Whether an assessment process should be formative or summative cannot be decided on informetric grounds.
In empirical research into the effects of the application of indicators upon researchers’ behavior, the analyst’s view on whether this application is valuable and whether or not citations indicate performance should not influence the design of the study or the interpretation of its outcomes.
As Duintjer states, “The choice of research fields in science is not only related to conceptual points of view, but also to the extent to which the expected results of the research can foreseeably be of importance for certain social power groups, which are sometimes involved in conflicts and who will use these results in their favor and to the detriment of their opponents” (p. 32). Conversely, value decisions are also at stake when investigations are launched to unmask ideological facade decorations and hidden power relations and can therefore foreseeably contribute to public awareness or political “Aufklärung” (p. 33).
For instance, managers at a research institution or research funding organization may commission informetric researchers to conduct a study with the aim of using its outcomes to solve an internal conflict or to enhance the institution’s international status.
However, critical informetric studies can also question a particular situation or practice. For instance, the work of the Leiden team headed by Anthony van Raan in the early 1980s strongly underlined the critical potential of bibliometric indicators: their capacity to raise questions for academic department managers and funding organizations about quality assessment and funding policies. In a study of research groups at their own university, the team obtained evidence of conservatism in funding policies, favoring groups with a long publication and funding track record at the expense of emerging groups headed by young researchers exploring new approaches (
“Scientific explanations and predictions are often sought with the help of experiments, looking for behaviors of an isolated class of phenomena under self-determined conditions. But when organizing an experiment with laboratory animals or test subjects, one also has to deal with moral values that also regulate, outside of science, our relationship to animals and humans” (Duintjer, p. 34).
When developing and testing new assessment methodologies or indicators, their validity can be examined only if they are applied to “real” performing entities, such as individual researchers, groups or institutions. Therefore, application experiments or ‘try-outs’ are organized, in which the outcomes of a method are discussed with the subjects of the assessment. However, if these outcomes become public—for instance, published in a journal article—or are shared with those who commissioned the experiment—e.g., a managing director of the commissioning organization—they may harm the prestige of the subjects of the assessment, especially individual researchers and departments.
The environment in which evaluative-informetric experiments take place is, in this respect, comparable to that of experiments with human test subjects: moral values that regulate our relationship to fellow humans outside of science apply here as well.
Duintjer emphasizes a so-called “structural equivalence” of science and technology. All theoretical statements or explanations in modern science can be converted into technological statements that answer the question “what could be done?” “All pure theoretical research in modern science provides society with means to steer nature and man, without indicating in which direction one should steer” (p. 38). Duintjer advocates an “overall, democratic value discussion regarding the direction, goals and standards to which society directs itself and is directed” (p. 38). “Obviously, not merely scientists should decide on the direction of the whole of society, especially not because value judgments cannot be derived from theoretical-empirical knowledge itself” (p. 38).
As outlined in the introduction section, Moed argued that evaluation criteria and policy objectives are not informetrically demonstrable values.
Following Duintjer’s statements quoted above, one can maintain first of all that not merely informetricians should decide on the values, standards and political objectives underlying an assessment, especially not because value judgments cannot be derived from informetric research itself. What is more, one can argue that an overall discussion is needed within the academic community and the research policy domain about the objectives and criteria in research assessment processes at various levels of aggregation—e.g., individuals, institutions—and about appropriate conditions for the use of informetric indicators in these processes.
Duintjer introduces this interface between science and values as follows: “As already said, the theoretical-empirical knowledge of science does contain technological information about what we can do, but it does not yet answer the normative question of what we should do. The latter question was precisely the subject of the advocated value discussion (see previous section, HM). One may wonder, however, whether one can perhaps derive from the ethos that underlies scientific practice also elements that can be presented to society as moral and political values” (p. 39/40).
He proposes a possible extension not only of norms regulating the scientific attitude toward statements and theories, such as adopting a critical attitude, but also of norms regulating the relationships among researchers, such as openness, to moral and political values outside of science.
Within the context of the current paper, the question emerges as to whether and how science-internal norms could provide guidance in science assessments, especially in the further theoretical and practical development of evaluative frameworks. Following Duintjer’s line of reasoning, one could, for instance, propose to extend the intra-scientific principles of openness and adopting a critical attitude, or the ‘Mertonian’ norm of disinterestedness toward the formation of an evaluative judgment in a research assessment process (
As outlined in Section 3, scientific statements and practices are regulated by specific values in the form of objectives and rules. According to Duintjer, these values define what could be termed a particular access road into reality. The value field of modern science is not the only value sphere; rather, it forms part of a comprehensive whole of values next to other value spheres, such as those of politics, art, philosophy of life and existential experience.
In research assessment, distinct value domains come together, making it a complex activity that is difficult to grasp. The term ‘research performance’ relates to an agglomerate of internal-scientific values, such as ‘methodological soundness’, or to notions as complex as ‘contribution to scientific-scholarly knowledge in a discipline’ or, moving outside the boundaries of a particular discipline, ‘to the advancement of scientific-scholarly knowledge in general’. Reaching beyond the domain of science and scholarship, values in research assessment may refer to ‘improving the state of mankind’ or to technological, economic or societal merit.
The domain that Duintjer denoted as “modern science” and that is outlined in the introduction section does not embrace other forms of scholarship that he denotes as “hermeneutic” and “critical”.
This concluding section aims to draw conclusions from Duintjer’s framework and notions outlined above for the values and limits of the use of informetric methods in the evaluation of scientific-scholarly research. It proposes a series of practical guidelines that may guide practitioners in evaluative informetrics in their research practices and in the application of informetric methods in research assessment.
When informetric investigators empirically examine value perceptions, for instance, how researchers perceive research quality, they should not draw conclusions as to whether these perceptions are valid, but treat them strictly as objects of empirical study.
In their scientific statements, informetricians should avoid the use of suggestive or insinuating terms that evoke an impression or sentiment that is not supported by presented empirical evidence.
Empirical informetric research related to research quality almost inevitably makes certain assumptions about the political or evaluative context. The value-free principle requires that informetricians make the assumptions of their tools as explicit as they can to their colleagues and to users of their information.
Making the evaluative assumptions of informetric methods explicit is a shared responsibility, one on which researchers should keep each other focused. In a sense, this is a never-ending activity. Rather than ignoring this principle, informetricians should accept it and give it a permanent place in their work.
When outcomes of informetric assessments are made public, a theoretical justification of the methodology should highlight both its potential and its limits.
In experimental informetric studies of the validity or usefulness of informetric tools, outcomes may damage the prestige of assessed subjects when they are made public. Research reports on such experiments should anonymize investigated individuals and departments.
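As a minimal sketch of what such anonymization could look like in practice (the record fields and pseudonym scheme below are illustrative assumptions, not a prescribed procedure), entity names can be replaced with stable pseudonyms before a report leaves the research team:

```python
import hashlib

def pseudonym(name: str, prefix: str) -> str:
    """Derive a stable, non-identifying label such as 'Dept-3f2a'."""
    digest = hashlib.sha256(name.encode("utf-8")).hexdigest()[:4]
    return f"{prefix}-{digest}"

# Hypothetical assessment records; the field names are illustrative only.
records = [
    {"department": "Astrophysics", "citations_per_paper": 12.4},
    {"department": "Medieval History", "citations_per_paper": 1.7},
]

# Replace real names with pseudonyms; keep the indicator values intact.
anonymized = [
    {**r, "department": pseudonym(r["department"], "Dept")} for r in records
]
print(anonymized)
```

A hash-based label is stable across reports but can be guessed back from a list of known departments; where the stakes are higher, a randomly generated mapping kept under access control is the safer choice.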
An evaluative informetric analysis should be carried out within a well-articulated evaluative framework, i.e., a set of evaluation criteria to be applied that are in agreement with the policy issue and assessment objectives.
An important element in the definition of an evaluative framework is distinguishing categories of performance values, including extra-scientific merits as well, each with its own assessment methods and indicators. An evaluative framework should justify the choice of a particular combination of approaches.
Informetricians should make clear that the informetric approach to research assessment represents only one particular way of analyzing research performance, next to other approaches, and that none of these has a priori a preferred status.
Humanities and social sciences could contribute to the development of assessment methods and to the definition of an assessment’s evaluative framework, having an enlightenment function rather than a technological function.
Informetricians should maintain in their applied evaluative studies a neutral position toward the policy issues addressed and the criteria specified in an evaluative framework. They should refrain from advocating particular political or evaluative a prioris.
As professional experts, informetricians’ competence lies in informetric methodology itself; the validity of evaluation criteria and the appropriateness of policy objectives fall outside their professional jurisdiction.
Informetric researchers could propose that evaluators and policy makers incorporate fundamental scientific values such as openness and adopting a critical attitude in the set-up and implementation of research assessment processes.
Informetricians could promote and participate in an overall discussion within the academic community and the research policy domain about the objectives and criteria in research assessment processes and the role of informetric tools therein.
The notion of an evaluative framework has a practical dimension and a theoretical dimension. The first relates to the process during which research assessment takes place: how it is organized, which evaluation model is chosen and which rules and principles regulate the process. The theoretical dimension relates to the assessment objectives and to an articulation of what has to be evaluated, which criteria and yardsticks are to be applied and how these are justified. The current contribution focuses on this second dimension.
Duintjer states, “I start from the assumption that the distinction between facts and values is legitimate in the sense of being a logical distinction between types of language use. But this logical distinction should not be conceived as a metaphysical separation of two worlds. Facts are sought and found in a context that also comprises value components, while on the other hand facts may give rise to the design of new values that can in turn be related to real possibilities” (p. 27).
“Hermeneutics is the theory and methodology of interpretation, especially the interpretation of biblical texts, wisdom literature, and philosophical texts” (
“Critical theory is the reflective assessment and critique of society and culture by applying knowledge from the social sciences and the humanities to reveal and challenge power structures” (
The author has no competing interests to declare.