Decision makers in the research landscape often use research metrics, particularly citation metrics (bibliometrics), to gain insights into the performance of articles, authors, publications and institutions. There are large differences across disciplines in cultures of citation, authorship and publication rates, as well as ongoing philosophical debate about the merits of using bibliometrics. To address these issues, normalized metrics have been developed, and there is a range of activity in the area of responsible metrics.
This page will help you to take a responsible approach to presenting your research in its best light. It highlights citation metrics and other indicators that may be used as evidence in your promotion or funding application, or to identify your impactful papers. For best results, we recommend curating your online publication profile before generating research metrics.
Research metrics for your portfolio include citation metrics and altmetrics.
Citation Metrics for your portfolio
Citation metrics ("bibliometrics") analyse the number of times other researchers refer to (cite) a given publication within scholarly publishing. Citation metrics can be generated on author, article or publication level. Smart citation platforms such as Scite and Semantic Scholar use artificial intelligence to give more context to citations.
Author-level citation metrics
You may be asked for author-level metrics as part of a promotion or other process. These can be obtained from Scopus, Web of Science and Google Scholar. Examples include:
- Citation count: Also known as Sum of times cited, this is a simple measure of the number of citations for an article, researcher or publication. Author-level citation counts are available on Web of Science, Scopus and Google Scholar/Publish or Perish.
- Beamplots: Clarivate launched Author Impact Beamplots on the Web of Science platform in 2021. Rather than a single number, beamplots provide a graphical representation of the citation performance of an author's publications. They are built on percentiles, Clarivate's article-level metric that normalizes for the discipline, type and age of each paper.
- h-index: The h-index is a score that attempts to summarize a researcher's impact and productivity in a single number. It is controversial because it tends to favour late-career researchers in disciplines with fast publication rates and cultures of multiple authorship.
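The h-index itself has a simple definition: a researcher has index h if h of their publications have each received at least h citations. A minimal sketch of that calculation (the citation counts below are invented example data, not taken from any database):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    h papers with at least h citations each."""
    # Rank papers from most to least cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        # The paper at position `rank` must have at least `rank` citations.
        if count >= rank:
            h = rank
        else:
            break
    return h

# Five papers with these (made-up) citation counts give an h-index of 3,
# because three papers have at least 3 citations each.
print(h_index([10, 8, 3, 2, 1]))  # → 3
```

This also makes the criticism above concrete: the score can only grow with more papers and more citations, so it inherently rewards longer careers and faster-publishing fields.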
Obtaining Author-level citation metrics
Web of Science
- Access the Web of Science Core Collection, and from the search page, click to search 'Authors'.
- Complete the search form and click 'Search'. You may need to refine by subject category and organization, and to select one or more author record(s) from the results list.
- The author profile page gives a list of publications and the Author Impact Beamplot Summary, with a link to the View Full Beamplot. Scroll down a little to see the 'Citation Network' box on the right of the page - this includes sum of times cited, h-index and a link to the citation report.
Google Scholar
- Go to Google Scholar and search for a researcher's Google Scholar profile page (or create your own). The number of citations, h-index and other metrics are displayed on the right of the page.
- If you are looking for author metrics for a researcher without a Google Scholar Profile, you will need to use Harzing's Publish or Perish.
Scopus
- Access Scopus and undertake an author search.
- Select all relevant author results by ticking the boxes, then click to “View Citation Overview” – author metrics will be displayed on the right of the page. You may need to alter the date range to capture all of the publications.
- Scroll down to see a table listing publications and the number of citations for each, the top row gives totals for all publications.
It is important to check your profile and scores on each service to ensure you are as well represented as possible. When quoting your author metrics, ensure you acknowledge the source of your score, and the date it was generated – different databases will give different scores due to their differing coverage. It may be worth augmenting your author metrics with some of the article level metrics detailed below.
Article-level citation metrics
There are several article-level metrics that you may find useful for your portfolio. These are available from the three key platforms: Web of Science, Scopus and Google Scholar.
Citation counts are usually displayed in the results list on each platform, and will vary with the coverage of the database used. Whilst citation counts can be useful, they have their limitations: a raw count gives no indication of whether it is high or low for that discipline, age of paper, or type of paper (e.g. review articles tend to generate more citations). Other metrics that give more context to the performance of your paper are listed below.
Normalized article-level metrics
Normalized metrics address the citation differences that arise from disciplinary cultures of multi-authorship, length of reference lists and speed of publication, and also take into account that some types of publication tend to receive more citations. Normalized article-level metrics are available on Web of Science and Scopus.
Normalized metrics on Web of Science
Web of Science offers three normalized article-level metrics: Highly Cited, Hot Papers and Percentiles.
- Highly Cited is given to papers in the top 1% in each of the 22 Web of Science subject categories, per year, and displays as a small golden trophy in the results list and on the article page.
- Hot papers are in the top 0.1% of papers by citations for field and age, and display as a small red flame in the results list and on the article page.
- Percentiles are a little trickier to find: they are currently only accessible via the full author beamplot (see above), by hovering over the purple points. A higher percentile value means better performance, so a paper with a percentile value of 99 is in the top 1% of most-cited publications of the same document type, year and subject category.
Normalized metrics on Scopus
Scopus's normalized article-level metrics are available in the Scopus Document Details page, by clicking on 'View all metrics', and include the following:
- Field weighted citation impact (FWCI) gives a score based on the number of citations received, compared to expected number of citations for similar documents. A FWCI score of 1 indicates that a paper is receiving the expected number of citations for a paper of that age, discipline and type; higher than 1 means that your paper has a higher than average number of citations.
- Percentile benchmark complements the FWCI and compares items of the same age, subject area and document type over 18 months. 99th percentile is high, and indicates an article in the top 1% globally.
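Both scores are, at heart, a ratio and a rank. The sketch below illustrates the underlying arithmetic only; the numbers are invented, and Scopus's actual FWCI and percentile calculations use its own subject baselines and citation windows:

```python
def fwci(citations, expected_citations):
    """Field-weighted citation impact: actual citations divided by the
    average citations of similar documents (same age, type, discipline)."""
    return citations / expected_citations

def percentile_benchmark(citations, peer_citations):
    """Percentage of comparable documents that this paper outperforms."""
    below = sum(1 for c in peer_citations if c < citations)
    return 100 * below / len(peer_citations)

# A paper with 30 citations, where similar papers average 15, has an
# FWCI of 2.0 -- twice the expected number of citations.
print(fwci(30, 15))  # → 2.0
```

The point of both metrics is the comparison set: a count of 30 citations means very different things in a fast-citing field than in a slow one, which is why the guide recommends them over raw counts.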
Smart citation metrics
Smart citation platforms are newer services using artificial intelligence to gather more contextual information about how a paper has been cited. Researchers can use them to gain extra insight about how their publications have been taken forward in the scholarly landscape:
- Scite enables you to see how many of the citations are supporting, disputing or mentioning.
- Semantic Scholar shows how many papers cite the background, results, methodology etc, and how many are "highly influenced papers" - those which have cited your article several times.
Altmetrics for your portfolio
A type of article-level metric, altmetrics measure and link to the attention a research output has received on social media and other platforms. This gives useful information about the attention your work is receiving outside scholarly publishing, and can serve as an early indicator of possible intentions to cite. Altmetric summaries let you see the details of the attention you have received and, in many cases, link out to the source. Altmetric information you can obtain includes:
- Scholarly activity online - how many times people have clicked through from bibliographic databases to your abstract or full text (on Plum); or have added papers to their Mendeley, or other scholarly collaboration network account. This may indicate early intentions to read or possibly cite your work.
- Scholarly commentary online can include mentions in blogs, news and policy documents, giving a powerful indication of where your research has been discussed or quoted beyond the academic community.
- Social Media mentions covers activity on Facebook, Twitter and Google+. This may indicate where there is interest in specific community groups, or more public engagement.
University of Birmingham researchers can obtain altmetric information from two sources:
- Plum metrics are embedded in the Scopus and EBSCO databases, which you can access via FindIt@Bham. Click on the Plum Print to access the PlumX Metrics Detail page – these are available in the Scopus Document Details page, or the EBSCO Search Results List.
- Altmetric.com. You will need to add the Altmetric It! Bookmarklet to your browser toolbar. To do this:
- Go to Altmetric for Researchers and select 'Tools'. Click to 'Learn more' for the Altmetric bookmarklet.
- Complete the form, then click and drag the bookmarklet onto your toolbar.
- Search for a key paper in your area on your usual service, then link out to the full text via a DOI link. Click the 'Altmetric it!' bookmarklet on your toolbar to generate the Altmetric donut. Clicking on the score generates an Altmetric summary, including which social media channels have picked up on the paper.
Note: it is advised to use the Altmetric attention score only as a preliminary indicator of the amount of attention an item has received. It can help to identify where there are ‘mentions’, and signifies where an item has achieved a high level of engagement.
A move away from traditional author metrics?
Before a publication appears with author names attached, a huge amount of work goes into the associated research project, and contributions will have been made by many people whose names won't appear on the paper as authors. Initiatives such as CRediT (Contributor Roles Taxonomy) are being explored to give recognition to people in roles other than principal investigator and lead researcher.
Library Services offers online training via our Influential Researcher Canvas Course
For one-to-one appointments and bespoke workshops, contact the Research Skills Team in Library Services.