Research Indicators: an introduction

Research indicators aim to quantify and monitor the importance of published research. They fall into two broad groups:

Citation metrics analyse the number of times other researchers refer to (cite) a given publication. They can be a useful measure of the level of attention a piece of work receives within scholarly publishing, and can be generated at the article, author or publication level.

Alternative metrics ("altmetrics") measure the attention a publication receives on social media and other online platforms, offering useful information about impact outside scholarly publishing and sometimes serving as an early indicator of future citations.

The need to measure research performance is largely driven by the necessity of making funding decisions, but citation metrics are also used in some university ranking methodologies and may be used when benchmarking institutions. The publications analysed are usually, but not exclusively, journal articles. Traditionally, research has been judged by other scholars in the same field through expert review, more widely known as peer review. Supplementing peer review with citation counts allows funders who are not subject experts to make informed decisions, although this approach does have its limitations.

Responsible metrics

There is a wide range of discussion and activity in the area of research metrics and impact measurement, aimed at ensuring that metrics are used responsibly and that all stakeholders operate on a level playing field.

Some of the key themes in responsible metrics include:

  • Avoid and challenge use of the Journal Impact Factor (JIF) as a measure of the quality of individual research articles (DORA 1, 6, 7, 18; Metric Tide 4, 8)
  • Quantitative evaluation should support qualitative expert assessment; assessment should be based on scientific content rather than on metrics (DORA 3, 15; Leiden 1, 7; Metric Tide 1, 17a)
  • Consider the value and impact of all research outputs, including books, conference papers, datasets and software (DORA 3; Leiden 6)
  • Move towards a range of article-based metrics (DORA 7, 17; Metric Tide 4, 8)
  • Adopt responsible authorship practices, relating to the specific contributions of authors (DORA 8)
  • Measure performance against the research missions of the institution, group or researcher (Leiden 2, 3; Metric Tide 4)
  • Be transparent about how metrics are generated and explained, and about how decision makers use them (Leiden 4, 5; Metric Tide 6, 7, 9)
  • Use normalised indicators (Leiden 6)
  • Remember that the h-index varies by discipline and is database-dependent (Leiden 7)
  • Avoid false precision and use multiple indicators (Leiden 8, 9)
  • Use metrics responsibly from an equality and diversity perspective (Metric Tide 2, 3)
  • Advocate the use of unique identifiers such as ORCID iDs, ISNIs (for institutions) and DOIs (Metric Tide 3, 10, 11, 12, 13)

Sources of citation metrics data

1.   Clarivate provides Journal Citation Reports and the Web of Science, which was the original source for research indicators.

2.   Scopus is an Elsevier product. The free SCImago service uses Scopus data to generate journal indicators, and Elsevier's SciVal product builds on Scopus data to provide a research analytics package.

3.   Google Scholar data is used by Harzing's Publish or Perish (PoP) software.

This table (PDF, 40KB) gives more detailed information about each service.

Sources of altmetric data at the University of Birmingham

1. Plum metrics (embedded within Scopus and EBSCO databases)

2. Altmetric.com - add its bookmarklet to your browser toolbar to access altmetric attention information for any paper with a DOI.

Some useful indicators

Journal Impact Factor (JIF)

JIF is a journal-level indicator taken from Clarivate's Journal Citation Reports. It indicates the average number of citations received in a given year by papers that a journal published in the two preceding years. It can help inform publication strategy, but should not be used to measure the research quality of an individual article.
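
As an illustration, here is a minimal sketch of the two-year JIF calculation; the figures below are invented for demonstration, and real values come from Journal Citation Reports:

    # Hypothetical counts for a journal, for illustration only.
    citations_2024_to_2022_23_papers = 1500  # citations received in 2024
    citable_items_2022_23 = 500              # papers published in 2022-2023

    # JIF for 2024 = citations in 2024 to 2022-2023 papers, divided by
    # the number of citable items published in 2022-2023.
    jif_2024 = citations_2024_to_2022_23_papers / citable_items_2022_23
    print(jif_2024)  # 3.0 -> on average, three citations per paper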

h-index

The h-index is an author-level indicator provided by Clarivate's Web of Science, Scopus and Google Scholar. It attempts to distil an author's research activity into a single number: an author has an h-index of h if h of their publications have each received at least h citations. It may be requested as part of researcher profile information, but should be used with caution when comparing researchers in different disciplines and at different career stages.
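
A minimal sketch of the calculation, using invented citation counts:

    def h_index(citation_counts):
        """Largest h such that h papers each have at least h citations."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank   # this paper still meets the threshold
            else:
                break      # remaining papers have even fewer citations
        return h

    # Five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4:
    # four papers have at least four citations each, but not five with five.
    print(h_index([10, 8, 5, 4, 3]))  # 4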

Field Weighted Citation Impact

FWCI is an article-level metric provided by Scopus. It is the ratio of the citations an article has received to the expected world average for its subject field, publication type and publication year; a FWCI of 1.00 means the article has been cited exactly as often as the world average for comparable publications.
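
A minimal sketch of the ratio, using invented numbers (Scopus derives the expected value from its own database):

    # Hypothetical figures for a single article, for illustration only.
    citations_received = 12   # citations this article has received
    expected_citations = 8.0  # world average for comparable articles
                              # (same field, type and publication year)

    fwci = citations_received / expected_citations
    print(fwci)  # 1.5 -> cited 50% more than the world average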

Percentile benchmark

The percentile benchmark is an article-level metric provided by Scopus. It complements the Field Weighted Citation Impact by indicating which percentile a paper falls within with respect to the number of citations it has received. A score of 99% indicates that a paper is in the top 1% for its discipline, age and publication type.
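
A simplified sketch of a percentile calculation, using an invented cohort of comparable papers (Scopus applies its own, more refined methodology):

    # Hypothetical citation counts for ten comparable papers
    # (same discipline, age and publication type).
    cohort = [0, 1, 1, 2, 3, 5, 8, 13, 21, 40]
    paper_citations = 21

    # The percentile is the share of the cohort that the paper outperforms.
    percentile = 100 * sum(c < paper_citations for c in cohort) / len(cohort)
    print(percentile)  # 80.0 -> in the top 20% of its cohort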

Plum metrics

Plum metrics are article-level metrics provided by the Scopus and EBSCO bibliographic databases. They cover a range of indicators: usage (views and click-throughs), captures (e.g. saves to reference managers), mentions (blogs, news, comments, reviews, Wikipedia) and social media. Clicking the sidebar next to the publication details in Scopus gives access to these metrics.

Training and guidance

The Research Skills Team offers training and support on research indicators through its 'Influential Researcher' programme.

For more information please contact the Research Skills Team.