Traditional research metrics such as citation count, citations per publication and the h-index have a strong hold on the research assessment landscape because they are easy to describe, calculate and obtain. However, they have known problems: in particular, they advantage certain disciplines, making interdisciplinary comparisons difficult. Normalised metrics aim to address these and other concerns by only comparing papers of the same publication year (as it takes time for citations to accrue), type (e.g. review papers tend to attract more citations than other papers) and subject area (as different disciplines have different authorship and citation cultures). Each paper is then assigned a metric indicating whether it is generating the expected, a higher or a lower number of citations for a paper of that age, type and discipline. This gives more context to a paper's citation performance, "levels the playing field" and ties in with responsible metrics recommendations, e.g. principle six of the Leiden Manifesto, which says “Normalised indicators are required…”.
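Elsevier's Field Weighted Citation Impact (FWCI), discussed below, makes this concrete as a simple ratio of actual to expected citations (the notation here is ours, for illustration, rather than Elsevier's):

```latex
% c_p : citations received to date by paper p
% e_p : mean citations of all papers sharing p's publication year,
%       document type and subject area
\[
  \mathrm{FWCI}_p = \frac{c_p}{e_p}
\]
```

An FWCI of 1.00 means a paper is performing exactly as expected for its cohort; for example, 12 citations against an expected 8 gives an FWCI of 1.50, i.e. 50% above the world average.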
University of Birmingham researchers can obtain normalised metrics from Clarivate's Web of Science and Elsevier's Scopus/SciVal platforms. They include:
- article-level metrics such as Elsevier's Field Weighted Citation Impact (FWCI) and Clarivate's Percentiles
- author-level metrics such as Elsevier's 'Publications in the top 10% most cited' and Clarivate's Median Citation Percentile
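As a rough illustration of how the article- and author-level metrics relate, the sketch below derives an author-level 'top 10%' share from article-level percentiles. It is purely illustrative (a hypothetical function, not a Scopus/SciVal or InCites API), and percentile conventions differ between vendors; here 100 means 'most cited in its cohort'.

```python
def top_10_percent_share(percentiles: list[float]) -> float:
    """Share of an author's papers in the top 10% most cited, given
    each paper's citation percentile within its own year/type/subject
    cohort (0 = least cited, 100 = most cited)."""
    if not percentiles:
        return 0.0
    return sum(1 for p in percentiles if p >= 90) / len(percentiles)

# Five papers, two at or above the 90th percentile -> a 40% share.
print(top_10_percent_share([95.0, 91.5, 60.0, 42.0, 15.0]))  # 0.4
```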
Details of how to find them are available on our 'Research metrics for your portfolio' intranet page.
Normalisation process
Although an improvement on traditional research metrics, normalised metrics are not a perfect solution either, largely due to the way that papers are assigned to subject categories. For both Elsevier's Field Weighted Citation Impact (FWCI) and Clarivate's Percentiles, article-level normalisation involves the following (see the sketch after this list):
- Looking at the journal/publication venue rather than the article itself, meaning the disciplinary normalisation takes place at journal, rather than article, level. This can be problematic for cross- and interdisciplinary publications: if a paper from a discipline with a traditionally low citation rate is published in a journal alongside papers from disciplines with higher citation rates, it will automatically be disadvantaged.
- Assigning the journal to one or more subject categories. Clarivate’s Web of Science offers around 250 subject categories, whilst Elsevier’s Scopus has 304. The number of categories matters, but the process by which journals are assigned to them matters more. A 2016 paper by Wang and Waltman, assessing how accurately each system assigns journals to its categories, found that both Web of Science and Scopus performed well on one of its criteria, whilst Web of Science was significantly more accurate on another, despite having fewer subject categories. The Metric Tide report highlighted this issue: “A key issue in the calculation of normalised citation impact indicators is the way in which the concept of a research field is operationalised.” A great deal of trust is placed in these mechanisms when we use normalised metrics.
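A minimal sketch of these two steps, using hypothetical data rather than either vendor's implementation, shows both the mechanics and the journal-level pitfall: the subject category is carried by the journal, not the paper.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records. The category comes from the journal (step 1),
# not from the paper itself.
papers = [
    {"id": "A", "journal": "J1", "category": "Chemistry", "year": 2020, "type": "article", "citations": 12},
    {"id": "B", "journal": "J1", "category": "Chemistry", "year": 2020, "type": "article", "citations": 4},
    {"id": "C", "journal": "J2", "category": "History", "year": 2020, "type": "article", "citations": 3},
    {"id": "D", "journal": "J2", "category": "History", "year": 2020, "type": "article", "citations": 1},
]

# Step 2: build an expected-citations baseline for each
# (year, type, subject category) cohort.
cohorts = defaultdict(list)
for p in papers:
    cohorts[(p["year"], p["type"], p["category"])].append(p["citations"])
expected = {key: mean(cites) for key, cites in cohorts.items()}

# Normalised score: actual citations over the cohort expectation.
# Paper C scores 3 / 2 = 1.5, the same as paper A's 12 / 8 = 1.5,
# despite far fewer raw citations -- the point of normalisation.
for p in papers:
    p["normalised"] = p["citations"] / expected[(p["year"], p["type"], p["category"])]
```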
Using normalised metrics
Normalised metrics have their advantages and disadvantages and, like all other metrics, should not be used in isolation. The Global Research Report ‘Data categorization: understanding choices and outcomes’ comments:
“Citations are themselves value-laden constructs with social as well as research weight. Any aggregation of citation counts, subsequent management of the data through normalisation and fractionation, and choice of analytical methodology then applied, must introduce further subjective modification that moves from original information towards a stylized indicator.”
All of this strengthens the case for narrative representations of research impact. Initiatives such as the Royal Society’s Resume for Researchers (R4R) have been taken up by a range of stakeholders, including UKRI, which is adopting an R4R-like narrative CV. The University of Birmingham has also adopted the idea of narrative representations: the Birmingham Academic Career Framework promotions criteria (login required) ask researchers to complete a self-assessment matrix form, submit a narrative statement of achievement, and provide an up-to-date CV for consideration.
Developments in normalised metrics
Most normalised metrics take into account the age, discipline and type of a paper, but there are arguments for including other criteria, such as collaboration and multi-authorship. Clarivate’s Collab-CNCI adds another layer to the normalisation process: each paper is also normalised by collaboration type (i.e. whether there is no collaboration, domestic collaboration, or one of three levels of international collaboration).
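In terms of the sketch in the 'Normalisation process' section above, this amounts to one extra dimension in the cohort key. The snippet below is again illustrative, not Clarivate's implementation, and the collaboration labels are only an example reading of the scheme.

```python
def collab_cnci_cohort_key(p: dict) -> tuple:
    """Cohort key for a Collab-CNCI-style baseline: the usual year,
    type and subject dimensions plus a collaboration type such as
    "none", "domestic" or one of three international levels."""
    return (p["year"], p["type"], p["category"], p["collab_type"])
```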
Further help
The Library's Research Skills Team offers online training via our Influential Researcher Canvas course.
For one-to-one appointments and bespoke workshops, contact the Research Skills Team in Libraries and Learning Resources.
Reference list and bibliography
- Hicks, D., Wouters, P., Waltman, L. et al. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature 520, 429–431.
- Rowlands, I. (2019). Six weeks is a long time in bibliometrics: Stability and Field-Weighted Citation Percentile. The Bibliomagician, 21 November 2019.
- Szomszor, M., Adams, J., Pendlebury, D.A. and Rogers, G. (2021). Data categorization: understanding choices and outcomes. ISI Global Research Report.
- Waltman, L. and van Eck, N.J. (2015). Field-normalized citation impact indicators and the choice of an appropriate counting method. Journal of Informetrics 9(4), 872–894.
- Wang, Q. and Waltman, L. (2016). Large-scale analysis of the accuracy of the journal classification systems of Web of Science and Scopus. Journal of Informetrics 10(2), 347–364.
- Wilsdon, J. et al. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management.