Choosing a good metric

Metrics in themselves are inherently neither good nor bad: decisions made on their strength are only valid when the context in which they are used is appropriate. Using a number of different metrics to triangulate a judgement is considered good practice, and peer review of research outputs alongside the use of metrics is still considered the gold standard.

When using Web of Science or Scopus as your data source, it is important to curate your online identity to ensure that all relevant papers and citations are attributed to your profile.

Some considerations when choosing metrics

  • Disciplines vary in their publication practices, and citation patterns differ as a result: research outputs in some disciplines are cited far more frequently than in others. Researchers should therefore be compared within their own discipline areas to avoid erroneous benchmarking. Bibliometrics also focus on measuring citations, mostly to journal articles, and disciplines such as the arts, humanities and social sciences rely less on journal publications.
  • Metrics have the potential to be ‘gamed’, so that self-citing and the citing of close colleagues can artificially boost citation counts.
  • A citation in itself is not automatically a measure of prestige.  A paper may be cited because it is an example of ‘bad’ research.
  • Sources used to supply data for citation counting differ in coverage, indexing different journals – results will vary depending on the data source used.
  • Citation counting by itself will automatically favour experienced researchers over those at an earlier stage in their career, as they will have produced a greater number of outputs. Only researchers at similar career stages should be compared.

Commonly used metrics  

There are a number of metrics commonly used to assess the impact of research.  The strengths and weaknesses of each can be summarised as follows:

Scholarly output
  • What it measures: the total number of outputs published.
  • Strengths: can be used for an individual, group or institution.
  • Weaknesses: measures productivity rather than impact.

Citation count (of a publication)
  • What it measures: the number of times an output has been cited by others.
  • Strengths: easy to measure using sources such as Web of Science, Scopus and Google Scholar.
  • Weaknesses: will be lower for papers published more recently; doesn’t take account of negative citations or gaming.

Cited publications
  • What it measures: the ‘citability’ of a set of publications: how many of this entity’s publications have received at least one citation?
  • Strengths: easy to measure using sources such as Web of Science, Scopus and Google Scholar.
  • Weaknesses: will be lower for papers published more recently; doesn’t take account of negative citations or gaming.

Number of citing countries
  • What it measures: the number of distinct countries from which an entity’s publications have received citations.
  • Strengths: indicates the geographical visibility of an entity’s publications.

Field-weighted citation impact (SciVal)
  • What it measures: the ratio of citations received relative to the expected world average for the subject field, publication type and publication year.
  • Strengths: shows how the citations received by an entity’s publications compare with the world average.
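A field-weighted citation impact is a simple ratio, where 1.0 represents the world average. The sketch below uses hypothetical figures; in practice the expected world-average value comes from the data provider (SciVal).

```python
def fwci(citations_received, expected_citations):
    """Field-weighted citation impact: actual citations divided by the
    world-average citations expected for outputs of the same subject
    field, publication type and publication year."""
    return citations_received / expected_citations

# Hypothetical paper: 12 citations against an expected world average of 8.
print(fwci(12, 8))  # 1.5 -> cited 50% more than the world average
```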

H-index
  • What it measures: a combination of the productivity (scholarly output) and citation impact (citation count) of an entity’s publications.
  • Strengths: easy to measure using sources such as Web of Science, Scopus and Google Scholar; a single number determines the score.
  • Weaknesses: varies according to the data source used; ignores small numbers of highly cited papers; will be lower for papers published more recently; doesn’t take account of negative citations or gaming.
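The h-index is straightforward to compute from a list of citation counts: an entity has index h if h of its outputs have each been cited at least h times. A minimal sketch (illustrative only; real scores depend on which data source supplies the citation counts):

```python
def h_index(citation_counts):
    """Largest h such that h outputs have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times: four papers have >= 4 citations,
# but not five papers with >= 5 citations.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how the single number hides detail: a researcher with one paper cited 1,000 times still has an h-index of 1, which is why the h-index ignores small numbers of highly cited papers.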

Collaboration
  • What it measures: the extent to which an entity’s publications have international, national, or institutional co-authorship, and single authorship.
  • Strengths: gives an insight into whether collaboration in a discipline will enhance impact.
  • Weaknesses: relies on the assumption that past collaborations will remain beneficial.

Collaboration impact
  • What it measures: the citation impact of an entity’s publications with particular types of geographical collaboration.
  • Strengths: shows how many citations an entity’s internationally, nationally, or institutionally co-authored publications receive, as well as those with a single author.

Academic-corporate collaboration
  • What it measures: the citation impact of an entity’s publications with or without both academic and corporate affiliations.
  • Strengths: gives an insight into whether commercial collaboration in a discipline will enhance impact.

Academic-corporate collaboration impact
  • What it measures: the citation impact of an entity’s publications with particular types of corporate collaboration.
  • Strengths: shows how many citations an entity’s publications receive when they list both academic and corporate affiliations, versus when they do not.

Altmetric attention score
  • What it measures: an indicator of the amount of attention that an output has received. The score is derived from an automated algorithm and represents a weighted count of the attention picked up from each type of source (e.g. a news item scores more highly than a tweet).
  • Strengths: measures the attention an output receives without it needing to be cited in academic journals; shorter lead time than traditional metrics.
  • Weaknesses: measures attention, not quality; only tracks public attention; no discipline adjustment is applied.

Views per output
  • What it measures: the number of times an output has been accessed online.
  • Strengths: doesn’t rely on the output being cited, so has a shorter lead time.
  • Weaknesses: cannot prove that the output has actually been read.

Downloads per output
  • What it measures: the number of times an output has been downloaded online.
  • Strengths: doesn’t rely on the output being cited, so has a shorter lead time.
  • Weaknesses: cannot prove that the output has actually been read.

Publications in top journal percentiles
  • What it measures: the extent to which an entity’s publications are present in the most-cited journals in the data universe: how many publications are in the top 1%, 5%, 10% or 25% of the most-cited journals?
  • Strengths: arguably indicates ‘academic impact’.
  • Weaknesses: assumes that publication in ‘high impact’ journals is synonymous with quality.

Journal Impact Factor
  • What it measures: the average number of times articles from the journal published in the past two years have been cited in the Journal Citation Reports (JCR) year. The Impact Factor is calculated by dividing the number of citations in the JCR year by the total number of articles published in the two previous years.
  • Strengths: arguably indicates ‘academic impact’; measures journal-level impact.
  • Weaknesses: assumes that publication in ‘high impact’ journals is synonymous with quality; article-level impact is not measured; the score can be skewed by a single highly cited paper (view the JIF over a period of years to check that a high JIF is the norm and not an anomaly).
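The calculation just described is a plain ratio. A sketch with hypothetical figures (real counts come from Journal Citation Reports):

```python
def impact_factor(citations_in_jcr_year, items_prev_two_years):
    """JIF for year Y: citations received in Y to items the journal
    published in Y-1 and Y-2, divided by the number of citable items
    the journal published in Y-1 and Y-2."""
    return citations_in_jcr_year / items_prev_two_years

# Hypothetical journal: 600 citations received in 2024 to articles from
# 2022-2023, during which it published 200 citable items.
print(impact_factor(600, 200))  # 3.0
```

Because the numerator is a raw total, a single article cited hundreds of times can lift the whole journal's score, which is why a high JIF should be checked across several years.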

SJR (SCImago Journal Rank)
  • What it measures: the scientific influence of scholarly journals, accounting for both the number of citations received by a journal and the importance or prestige of the journals such citations come from.
  • Strengths: arguably indicates ‘academic impact’; measures journal-level impact.
  • Weaknesses: assumes that publication in ‘high impact’ journals is synonymous with quality; article-level impact is not measured.

SNIP (Source Normalised Impact per Paper) – Elsevier
  • What it measures: contextual citation impact, weighting citations based on the total number of citations in a subject field.
  • Strengths: adjusts for disciplinary differences and offers a normalised benchmark.

IPP (Impact per Publication)
  • What it measures: the number of citations received in a year (Y) by scholarly papers published in the three previous years (Y-1, Y-2, Y-3), divided by the number of scholarly papers published in those same years (Y-1, Y-2, Y-3).
  • Strengths: uses a citation window of three years, which is considered to be the optimal time period to accurately measure citations in most subject fields.
  • Weaknesses: not normalised for the subject field, and therefore gives a raw indication of the average number of citations a publication in the journal will likely receive. When normalised for the citations in the subject field, the raw Impact per Publication becomes the Source Normalised Impact per Paper (SNIP).

Eigenfactor
  • What it measures: a rating of the total importance of a scientific journal. Journals are rated according to the number of incoming citations, with citations from highly ranked journals weighted to make a larger contribution to the score than those from poorly ranked journals.
  • Strengths: thought to be more robust than the Impact Factor, which purely counts incoming citations without considering the significance of those citations; for a given number of citations, citations from more significant journals result in a higher Eigenfactor score. Originally a journal-level measure, it has since been extended to the author level, and can be used in combination with the h-index to evaluate the work of individual scientists.
  • Weaknesses: assumes that publication in ‘high impact’ journals is synonymous with quality.
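The “citations from important journals count for more” idea is an eigenvector-centrality calculation. The toy sketch below is a deliberate simplification — it omits the real method’s five-year citation window, self-citation exclusion and article-count scaling, and the citation figures are invented — but it shows the core iteration: each journal’s weight depends on the weights of the journals citing it.

```python
import numpy as np

# Toy citation matrix C: C[i, j] = citations from journal j to journal i.
# Hypothetical numbers for three journals.
C = np.array([
    [0, 4, 1],
    [2, 0, 3],
    [1, 1, 0],
], dtype=float)

# Column-normalise so each journal distributes its outgoing citations
# as shares of its total.
P = C / C.sum(axis=0)

# Power iteration: repeatedly redistribute weight along citation links,
# so citations from highly weighted journals contribute more.
w = np.full(3, 1 / 3)
for _ in range(100):
    w = P @ w
w /= w.sum()
print(w.round(3))  # converged importance weights, summing to 1
```

The loop converges to the dominant eigenvector of the normalised citation matrix; a plain citation count, by contrast, would just be `C.sum(axis=1)` with every citation weighted equally.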