Dr. Domenico Giusti
Paläoanthropologie, Senckenberg Centre for Human Evolution and Palaeoenvironment
The act of publishing is a quid pro quo in which authors receive credit and acknowledgment in exchange for disclosure of their scientific findings.
Knowledge is built upon those findings and thus it is necessary to have in place a system to evaluate both research and researchers.
Both research and researchers are evaluated through two primary methods: peer review and metrics, the former qualitative and the latter quantitative.
Peer review is used primarily to judge pieces of research. It is the formal quality assurance mechanism whereby scholarly manuscripts (e.g., journal articles, books, grant applications and conference papers) are made subject to the scrutiny of others, whose feedback and judgements are then used to improve works and make final decisions regarding selection (for publication, grant allocation or speaking time).
Once they have passed peer review, research publications are then often the primary measure of a researcher's work ('Publish or Perish'). [...] However, assessing the quality of publications is difficult and subjective. [...] General assessment is often based on metrics such as the number of citations publications garner (h-index), or even the perceived level of prestige of the journal it was published in (quantified by the Journal Impact Factor).
The peer review system judges the validity, significance and originality of the work, rather than who has done it.
VoYS. Peer review. The nuts and bolts. A guide for early career researchers
Single-blind review: the reviewers know who the authors are, but the authors do not know who the reviewers are. This is the most common system in the science disciplines.
Double-blind review: the reviewers do not know who the authors are, and the authors do not know who the reviewers are. This is the main form of peer review used in the humanities and social sciences.
Open peer review: at its most basic (Open identities), reviewers know who the authors are and the authors know who the reviewers are. It can also mean inclusion of the reviewers' names and/or reports alongside the published paper (Open reports), comments from others at the pre-publication stage, or various combinations of these.
An umbrella term for a number of overlapping ways that peer review models can be adapted in line with the aims of Open Science. Ross-Hellauer 2017
Main traits (combined, or not):
Optional traits:
Being a peer reviewer presents researchers with opportunities for engaging with novel research, building academic networks and expertise, and refining their own writing skills. It is a crucial element of quality control for academic work. Yet, in general, researchers do not often receive formal training in how to do peer review. Even where researchers believe themselves confident with traditional peer review, however, the many forms of open peer review present new challenges and opportunities.
Open peer review hence aims to bring greater transparency, accountability, inclusivity and/or efficiency to the restricted model of traditional peer review.
Given these issues, potential reviewers may be more likely to decline to review.
Metrics have proliferated: usually well intentioned, not always well informed, often ill applied. Hicks et al 2015. Bibliometrics: The Leiden Manifesto for research metrics
Metrics related to social usage and online comment:
h-index
Some recruiters request h-index values for candidates. Several universities base promotion decisions on threshold h-index values and on the number of articles in 'high-impact' journals. Researchers' CVs have become opportunities to boast about these scores, notably in biomedicine. Everywhere, supervisors ask PhD students to publish in high-impact journals and acquire external funding before they are ready. Hicks et al 2015. Bibliometrics: The Leiden Manifesto for research metrics
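To make the quantity behind these scores concrete, here is a minimal sketch (in Python, with made-up citation counts) of how an h-index is computed: the h-index is the largest number h such that h of a researcher's papers have each received at least h citations.

```python
# Minimal sketch: computing an h-index from per-paper citation counts.
# The h-index is the largest h such that h papers have at least h citations each.
# The citation counts below are made up for illustration.

def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3, 0]))  # 4: four papers have at least 4 citations each
```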
Impact factor
The Journal Impact Factor is frequently used as the primary parameter with which to compare the scientific output of individuals and institutions. The Journal Impact Factor, as calculated by Thomson Reuters, was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article. With that in mind, it is critical to understand that the Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment. DORA
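For comparison, the arithmetic behind the two-year Journal Impact Factor is equally simple; the sketch below uses illustrative, made-up numbers and is not tied to any real journal.

```python
# Minimal sketch of the two-year Journal Impact Factor (JIF) arithmetic.
# JIF for year Y = citations received in year Y to items published in Y-1 and Y-2,
# divided by the number of citable items published in Y-1 and Y-2.

def journal_impact_factor(citations_in_year, citable_items_prev_two_years):
    """A journal-level average; it says nothing about any individual article."""
    return citations_in_year / citable_items_prev_two_years

# Illustrative (made-up) numbers: 1200 citations in 2020 to items published
# in 2018-2019, and 400 citable items published in those two years.
print(journal_impact_factor(1200, 400))  # 3.0
```

Because it is an average over a highly skewed citation distribution, the same JIF can describe journals with very different article-level citation patterns, which is one of the deficiencies DORA points to.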
Across the world, universities have become obsessed with their position in global rankings (such as the Shanghai Ranking and Times Higher Education's list).
A myriad of metrics is available, compounded by many similar versions of the same metric.
The commercial systems and tools available have not effectively addressed all the needs of a university.
It is difficult to know which metric will give the most useful insights, whether a metric is being calculated appropriately, and whether other institutions are looking at things in the same way.
The metrics used to evaluate research (e.g. Journal Impact Factor, h-index) do not measure - and therefore do not reward - open research practices. Open peer review activity is not necessarily recognized as "scholarship". Furthermore, many evaluation metrics - especially certain types of bibliometrics - are not as open and transparent as the community would like.
The Leiden Manifesto sets out 10 principles for the measurement of research performance, as a response to the pervasive misapplication of indicators in the evaluation of scientific performance.
Video: The Leiden Manifesto for Research Metrics, by Diana Hicks (Vimeo).
DORA recognizes the need to improve the ways in which the outputs of scholarly research are evaluated. The declaration was developed in 2012 during the Annual Meeting of the American Society for Cell Biology in San Francisco. It has become a worldwide initiative covering all scholarly disciplines and all key stakeholders including funders, publishers, professional societies, institutions, and researchers. We encourage all individuals and organizations who are interested in developing and promoting best practice in the assessment of scholarly research to sign DORA. DORA
The Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment:
Objectives:
A single research output may live online in multiple websites and can be talked about across dozens of different platforms.
Altmetrics are metrics and qualitative data that are complementary to traditional, citation-based metrics. They can include (but are not limited to) peer reviews on Faculty of 1000, citations on Wikipedia and in public policy documents, discussions on research blogs, mainstream media coverage, bookmarks on reference managers like Mendeley, and mentions on social networks such as Twitter.
Other altmetrics services: Paperbuzz, Impactstory, Dimensions.ai, PlumX Metrics, Snowball Metrics.
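As a concrete illustration of how such data can be gathered, the sketch below queries an altmetrics aggregator for a single DOI. It assumes the public Altmetric details endpoint (https://api.altmetric.com/v1/doi/<doi>) and the third-party requests package; the DOI and the response field names are assumptions for illustration, not a documented specification.

```python
# Hedged sketch: looking up altmetrics for one research output by DOI.
# Assumes the public Altmetric endpoint https://api.altmetric.com/v1/doi/<doi>
# and the 'requests' package; the field names below are assumptions, not a spec.
import requests

def fetch_altmetrics(doi):
    """Return the aggregator's JSON record for the given DOI."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    record = fetch_altmetrics("10.1234/example-doi")  # hypothetical DOI
    # Print a few indicative counts if the aggregator reports them.
    for key in ("score", "cited_by_tweeters_count",
                "cited_by_wikipedia_count", "readers_count"):
        print(key, record.get(key))
```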
A recent report from the European Commission (2017) recognizes that there are basically two approaches to Open Science implementation, and to the ways in which rewards and evaluation can support it:
Simply support the status quo by encouraging more openness, building related metrics and quantifying outputs;
Experiment with alternative research practices and assessment, open data, citizen science and open education.
More and more funders and institutions are taking steps in these directions.
Other steps funders are taking: allowing more types of research output (such as preprints) in applications and funding different types of research (such as replication studies).
Curry 2019 [Online access, 16 Jul '21]
Is research evaluation fair?
Research evaluation is only as fair as its methods and techniques. Metrics and altmetrics attempt to approximate research quality through the quantity of research outputs, an approximation that can be accurate but does not have to be.
Science is a method whereby a notion proffered by anyone must be supported by experimental data. This means that if somebody else is interested in checking up on the notion presented, that person must be allowed access to instructions as to how the original experiments were done. Then he can check things out for himself. It is not allowable in science to make a statement of fact based solely on your own opinion.
Claims made by scientists, in contrast to those made by movie critics or theologians, can be separated from the scientists who make them. It isn't important to know who Isaac Newton was. He discovered that force is equal to mass times acceleration. He was an antisocial, crazy person who wanted to burn down his parents' house. But force is still equal to mass times acceleration. It can be demonstrated by anybody with a pool table who is familiar with Newton's concepts. K. Mullis, 1998, Dancing naked in the mind field
However, social biases seem to be deeply rooted in such a 'prestige economy'.