Responsible metrics

There are various philosophical discussions about the merits of using citation scores and metrics as measures of research quality, and it is important to note that disciplines differ in their cultures of citation, authorship and publication rates.  There is a wide range of activity in the area of research metrics and impact measurement to ensure that measures are used responsibly, and that stakeholders experience a level playing field.  Key frameworks include the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto and The Metric Tide report, whose numbered recommendations are cited below.

Three key themes in responsible metrics are:

  • The need to avoid and challenge use of the Journal Impact Factor (JIF) as a proxy measure of quality for individual research articles
  • The importance of only comparing like with like.  For example, do not compare researchers at different career stages, or in different disciplines.  The use of normalised indicators goes some way to addressing this issue.
  • The need to use a range of impact measures in research assessment, and the importance of qualitative judgement, including peer review.

More themes in responsible metrics

  • Quantitative evaluation should support qualitative expert assessment (DORA 3, 15, Leiden 1, 7, Metric Tide 1, 17a)
  • The value and impact of all types of research outputs needs to be considered (DORA 3, Leiden 6)
  • A move towards a range of article based metrics is needed (DORA 7, 17, Metric Tide 4, 8)
  • The need for responsible authorship practices, relating to the specific contributions of authors (DORA 8)
  • Research performance should be measured against the research missions of the institution, group or researcher (Leiden 2, 3, Metric Tide 4)
  • Transparency of how metrics are generated, explained, and used by decision makers is needed (Leiden 4, 5, Metric Tide 6, 7, 9)
  • Normalised indicators are required (Leiden 6)
  • It should be noted that the h-index varies by discipline and is database-dependent (Leiden 7)
  • False precision should be avoided (Leiden 8, 9)
  • Responsible use of metrics is important from an equality and diversity perspective (Metric Tide 2, 3)
  • Unique identifiers such as ORCID (for researchers), ISNI (for institutions) and DOIs (for outputs) should be advocated (Metric Tide 3, 10, 11, 12, 13)

Responsible research assessment for researchers

DORA has four recommendations specifically for researchers:

15. When involved in committees making decisions about funding, hiring, tenure, or promotion, make assessments based on scientific content rather than publication metrics.
16. Wherever appropriate, cite primary literature in which observations are first reported rather than reviews in order to give credit where credit is due.
17. Use a range of article metrics and indicators on personal/supporting statements, as evidence of the impact of individual published articles and other research outputs.
18. Challenge research assessment practices that rely inappropriately on Journal Impact Factors and promote and teach best practice that focuses on the value and influence of specific research outputs.

Next generation metrics

The Global Research Council’s 2021 Conference Report “Responsible research assessment” acknowledged that “research assessment shapes research culture”, affecting how researchers undertake and disseminate their research.  Ideally, research assessment should encourage team science, transdisciplinary science, collaboration, open research, reproducibility and transparency, and equality, diversity and inclusion (EDI), and should enable greater mobility between industry and academia.  Next generation metrics attempt to address these challenges, and we outline some of the initiatives below.

      • Before a publication appears with author names attached, a huge amount of work goes into the associated research project.  Contributions will have been made by many people whose names won’t appear on the paper as authors.  Initiatives such as CRediT (Contributor Roles Taxonomy) are being explored to give recognition to people in roles other than principal investigator or lead researcher. 
      • Following on from the Metric Tide (2015), Wilsdon et al. published the 2017 report Next-generation metrics: responsible metrics and evaluation for open science.  The report aims to guide further work by including 12 targeted recommendations along the themes of the European Open Science Agenda - developing research infrastructures, fostering and removing barriers to Open Research, and embedding Open Research in society.  To learn more about Open Research practices, please consult our Open Research page and Open Research Canvas course.
      • In 2020 CESAER, the association of European Science and Technology Universities, published a white paper on Next Generation Metrics in the light of greater openness in research, with the aim of moving away from a culture of competition- and reputation-based assessment towards a culture of quality, trust and risk-taking.
      • The Hong Kong Principles were devised to reward and recognise scholars for behaviour that contributes to trustworthy research.
      • UKRI’s Future Research Assessment Programme (FRAP) seeks to understand what a healthy, thriving research system looks like and how an assessment model can best form its foundation. It began reporting in 2023 on strands including: 
          • evaluation of REF 2021
          • understanding international research assessment practice
          • investigating possible evaluation models that encourage excellent research and impact, support a positive research culture, and reduce the administrative burden on the HE sector.

Further help

Libraries and Learning Resources offers online training on responsible metrics via our Influential Researcher Canvas course.

For one-to-one appointments and bespoke workshops, contact the Research Skills Team in Libraries and Learning Resources.
