Research indicators aim to quantify and monitor the importance of published research by analysing the number of times other researchers refer to (cite) a given publication. The publications analysed are usually, but not exclusively, journal articles. Individual articles can be analysed by the number of times they are cited. Journals can also be given an impact factor based on the number of citations made to articles within them. Author indicators are becoming more prevalent, although they have their limitations. Comparisons of publications from different institutions can also be made.
The need to measure research performance is largely driven by the necessity to make funding decisions. Traditionally, research has been judged by other scholars in the same research field; by expert review, more widely known as peer review. Measuring the strength of peer review through citation counts allows funders who are not subject experts to make informed decisions.
Commonly used indicators
There are two types of indicators commonly used in the UK.
Journal indicators
- Are used to compare journals in a given academic discipline.
- Give rise to a ranking of journals with the most highly cited journal at the top of the ranking list.
- Rankings vary depending on the exact formula used, and also on the source data which is used, so that different measures may have different journals at the top of the ranking.
- The Journal Impact Factor (JIF) from Clarivate is the most common indicator in this category.
- See the MyRI worksheet Bibliometrics for Journal Ranking.
- See the MyRI worksheet Bibliometrics for Personal Impact.
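The arithmetic behind a JIF-style indicator is straightforward: the impact factor for year Y is the number of citations received in Y by items the journal published in the previous two years, divided by the number of citable items published in those two years. The sketch below illustrates the calculation with invented figures; real JIF values are computed by Clarivate from Web of Science data.

```python
# Illustrative sketch of a two-year impact factor calculation.
# The figures used here are invented for demonstration only.

def two_year_impact_factor(citations_in_year, citable_items):
    """Citations received in year Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_in_year / citable_items

# Hypothetical journal: 150 citable items in the previous two years,
# cited 450 times this year.
jif = two_year_impact_factor(450, 150)
print(jif)  # 3.0
```

Note how sensitive the mean is to outliers: a single paper attracting several hundred citations would raise this figure sharply, which is why the limitations below recommend checking the JIF over several years.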
Limitations of journal indicators
- Based on a mean score, so can be skewed by a single highly cited paper (it is therefore recommended that you check the JIF of a journal over several years to confirm that a high ranking is the norm rather than an anomaly).
- Journals in different disciplines cannot be compared because citation patterns differ between disciplines; for example, Economics papers tend to cite fewer sources than Medicine papers.
- The timespan used is arbitrary, with 2, 3 and 5 years used; different disciplines may need different timescales.
- Review journals (that is, journals consisting of review articles) attract high numbers of citations.
- Abuses of the system e.g. self-citations.
- Not relevant to disciplines where outputs are not journal articles, e.g. books.
- Not relevant to disciplines where it is not usual practice to cite extensively e.g. Economics.
- Negative citations count the same as positive ones (if an author criticises an article, the reference is counted in the same way as a supportive reference).
Author indicators
- Are used to compare researchers, but comparisons can only be made in a given academic discipline due to differing citation patterns.
- Based on the number of articles a researcher has published, and the number of times these articles have been cited by others.
- The average number of citations per author is a fairly crude way of measuring a person's research impact, so there are various measures which try to overcome this limitation.
- The most commonly used author metric is the h-index; see the MyRI datasheet The h-index.
- The h-index is dependent on the data used to calculate it.
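The h-index itself is simple to compute: an author has index h if h of their papers have each been cited at least h times. A minimal sketch of the calculation, using an invented citation list:

```python
# Sketch of the h-index calculation; the citation counts are invented.

def h_index(citations):
    """Largest h such that the author has h papers
    each cited at least h times."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cited in enumerate(counts, start=1):
        if cited >= rank:
            h = rank  # the paper at this rank still meets the threshold
        else:
            break
    return h

# Hypothetical author with six papers:
print(h_index([25, 8, 5, 4, 3, 1]))  # 4 (four papers cited at least 4 times)
```

Note that the paper with 25 citations contributes no more to the result than one with exactly 4, which is the first limitation listed below.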
Limitations of the h-index
- Highly cited articles are likely to be the most important, but the h-index gives them no extra credit beyond the threshold, so their influence is understated.
- Favours authors at the middle or end of their careers.
- Penalises authors with a small number of very important articles.
- Depends on citation-index coverage, which is incomplete for some document types (books are poorly covered), disciplines and foreign-language material.
- Ignores the number of authors on each paper.
Variations on the h-index have been developed to address some of these problems, but these are not as widely used.
Sources of research indicators data
- Clarivate provides Journal Citation Reports and the Web of Science; the Web of Science was the original source of research indicators data.
- Scopus, an Elsevier product. The free service SCImago uses Scopus data to generate journal indicators.
- Google Scholar data is used by the website Harzing's Publish or Perish (PoP).
All three services are freely available to staff at the University of Birmingham. Using only one of these services gives an incomplete picture. Differences between these sources include subject coverage, types of documents included, timespan covered and the metrics they calculate. This table (PDF - 40KB) gives more detailed information about each service.
There is a wide range of discussion and activity in the area of research metrics and impact measurement to ensure that measures are used responsibly and that stakeholders experience a level playing field.
Training and guidance
We offer training on research indicators through the 'Raising Your Research Profile' programme. Join us for an introductory 'Meet the Expert' session or request a bespoke session on research metrics.
A self-enrol Canvas course is available in the Canvas Gallery.
MyRI website: Measuring your research impact. This offers in-depth information including an online tutorial, an overview booklet and worksheets.
For more information, contact Vicky Wallace email@example.com
0121 414 4140