Citation Research and Impact Metrics: Citation Benchmarking

Index

Introduction to:
Article Assessments
Author Assessments
Country Assessments
Journal Rankings

Metrics:
Altmetric Score

Citation Benchmarking
Citation Counts for: 
---Articles
---Authors
---Countries
Citation Distribution, see Citation Benchmarking
CiteScore 
Collaboration

Eigenfactor Score, see Other Journal Rankings
ERIH Plus, see Other Journal Rankings

Field-weighted citation impact (FWCI), see Citation Benchmarking
FWCI, see Citation Benchmarking

Google Scholar (Journal) Metrics, see Other Journal Rankings

Harzing, see Other Journal Rankings
Hirsch-index, see h-index
h-index

iCite for:
---Articles, see Citation Benchmarking
---Authors

JIF
Journal Impact Factor

NIH ranking, see iCite

Publish or Perish software, see Citation Counts for Authors: Other Sources

RCR, see iCite
Relative Citation Ratio, see iCite

Scimago Country Rank (SCR)
Scimago Journal Rank, see CiteScore 
SJR, see CiteScore
SNIP, see CiteScore
Source Normalized Impact per Paper, see CiteScore

Usage Counts

What You Need to Know

With the citation benchmarking metrics, an article's performance is measured by comparing the article in question with the "average article" in the same field or journal.  For some of these metrics the calculation is made for you; others require you to do the math and make the assessment manually.

Important Points:

  • Decide what's more important for your situation - measuring against an average article within the same field (FWCI, RCR) or against the average article within the same journal (CiteScore, JIF, CDC). Or perhaps you'd like to measure one of each.
  • Compare apples to apples; do not compare a score from one metric with a score from a different metric.  Even if the numerical schemes look identical, the scores are derived from different data.
  • When using the "you do the math" metrics (CiteScore, JIF, CDC), make sure you are using the data from the appropriate year. 
  • Even though these metrics allow comparisons to other articles in the same field, same journal, and/or same publication year, they are still not a measure of quality, because articles may be cited for negative reasons as well as positive ones. Find out why an article is being cited.

The following metrics are provided for citation benchmarking: 

  • FWCI (Article's Field-Weighted Citation Impact Score from Scopus) 
  • RCR (Article's Relative Citation Ratio from iCite) 
  • CiteScore (Article's Citation Count vs Journal's CiteScore)
  • JIF (Article's Citation Count vs Journal's Impact Factor)
  • CDC (Article's Citation Count vs the Journal's Citation Distribution Curve)

Click on the tabs in the box below to see a description of each metric and how to get the scores. 

Instructions

The Field-Weighted Citation Impact (FWCI) score comes from the Scopus database and shows how an article's citation count compares to those of similar articles in the same field and timeframe.

Important Points:

  • Because the FWCI comes from the Scopus database, only documents within the database (1996 to the present) will have an FWCI.
  • Because the FWCI includes field normalization, theoretically, the score should be a better indicator of performance than a raw citation count. 
  • A score of 1.00 means the article is cited exactly as often as expected; a score greater than 1.00 means the article is doing better than expected, and a score less than 1.00 means it is underperforming.
  • The exact formula for the FWCI is not published on the Scopus or Elsevier websites, but it likely applies field normalization in much the way the CiteScore journal metric does.

Uses: 

  • In situations where article performance is being judged across a variety of subject areas, each with different citation behaviors.


To get an article's FWCI: 

  • Go to the Scopus database
  • Using the article's title, find the record for that article in the database
  • Click on the article's title to see the full record 
  • On the full record, go to the bottom of the right-hand column and look at the Metric box for: 
    • Number of citations,
    • The percentile that number represents, and 
    • Field-Weighted Citation Impact Score (FWCI)

  • In this case, the article was cited 17 times, putting it in the 94th percentile of articles within its field; its FWCI of 5.75 indicates that it received almost 6 times the citations of the average article in its field. (A small sketch of this arithmetic follows.)
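
Scopus does not publish the exact FWCI formula, but the ratio logic above can be illustrated with a minimal sketch. The "expected citations" figure below is back-calculated from this example for illustration; it is not a value Scopus reports.

    # Minimal sketch of the ratio behind an FWCI-style score.
    # "expected_citations" stands in for the field/timeframe average
    # that Scopus derives from similar documents.

    def fwci_style_score(actual_citations, expected_citations):
        """Return actual/expected; 1.00 means 'cited as expected'."""
        return actual_citations / expected_citations

    # The example above: 17 citations and an FWCI of 5.75 imply an
    # expected count of roughly 17 / 5.75 = 2.96 citations.
    print(round(fwci_style_score(17, 17 / 5.75), 2))  # 5.75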

About the Relative Citation Ratio (RCR)

iCite provides analysis for articles appearing in PubMed.  Citation data are drawn from several sources: PubMed Central, Europe PubMed Central, CrossRef, and Web of Science.  For each article, iCite includes the Relative Citation Ratio (RCR), a citation-based metric developed by the NIH.
 

Important Points:

  • The RCR is calculated as the citations per year for an article, compared with the expected number of citations per year received by NIH-funded papers in the same field and year. This "expected number of citations" is the distinctive part of the metric: the research field is defined as all the references in the articles that cite the article of interest, and an expected citation rate is then derived from the citation counts of the items in that field. An article with an RCR of 1.0 has received the same number of citations per year as the median NIH-funded paper in the same field; an article with an RCR of 2.0 has received twice as many citations per year (see the sketch after this list). For more details on the RCR calculation, see the reference below.
  • The article must be covered in PubMed to be analyzed by iCite.  
  • To analyze a set of citations, the set must be no larger than 1,000 items. 
  • Coverage starts with articles published in 1995.
  • Determination of the "expected citation rate" is controversial. 
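
The full NIH calculation involves several normalization steps, but its core ratio can be sketched as below. The benchmark value is a made-up stand-in for the field-expected rate that NIH derives; it is not taken from iCite.

    # Sketch of the core RCR ratio: an article's citations per year
    # divided by the expected citations per year of NIH-funded papers
    # in its co-citation-defined field. The benchmark is hypothetical.

    def rcr_style_score(citations, years_since_publication,
                        expected_citations_per_year):
        article_rate = citations / years_since_publication
        return article_rate / expected_citations_per_year

    # An article cited 30 times over 5 years, in a field whose
    # NIH-funded median is 3 citations per year, scores 2.0
    # (twice the expected rate).
    print(rcr_style_score(30, 5, 3.0))  # 2.0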

Uses:  

  • Most useful in academic units with a large percentage of NIH funding and/or publications appearing in health-sciences journals. 

References:

Hutchins, B.I., Yuan, X., Anderson, J.M., and Santangelo, G.M. Relative Citation Ratio (RCR): A new method to measure research influence at the article level. PLoS Biology 14(9): e1002541, 2016. doi: 10.1371/journal.pbio.1002541

To Get the RCR: 

  • Go to iCite
  • Enter the title of the article in the "Search PubMed" box
  • Click on the blue "Process" button at the bottom of the screen
  • The results page will display
  • The weighted RCR appears near the top of the table, in the far-right column.

For details about the information on the results page (and more), click on the Help link in the upper right corner of the screen.
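
iCite also offers a public web API that returns the RCR as JSON. The endpoint and field name below reflect iCite's published API at the time of writing; verify them against the Help documentation before scripting against them.

    # Fetch an article's RCR from the iCite API by PubMed ID.
    # Endpoint and the "relative_citation_ratio" field come from
    # iCite's public API documentation; confirm before relying on them.
    import json
    import urllib.request

    def get_rcr(pmid):
        url = "https://icite.od.nih.gov/api/pubs?pmids={}".format(pmid)
        with urllib.request.urlopen(url) as response:
            payload = json.load(response)
        # The API returns a "data" list with one record per PMID.
        return payload["data"][0].get("relative_citation_ratio")

    print(get_rcr(23456789))  # hypothetical PMID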

CiteScore is a brand-new (December 2016) product from Elsevier that uses citation data from the Scopus database to rank journals.  With this metric you compare an article's citation count to what CiteScore says would be expected of the average article in the journal.

Important Points: 

  • The CiteScore goes back to 2011, so only relatively recent articles can be assessed with this metric. 
  • The article must be in the Scopus database.
  • The article must have been published in the three years immediately preceding the journal's CiteScore.  For example, if you are using the journal's CiteScore from 2015, the article must have been published in 2012, 2013 or 2014. 
  • The CiteScore includes traditionally non-cited "front matter" in its calculations; journals with substantial front matter therefore have lower CiteScores than JIFs (the JIF excludes front matter from its calculation).
  • Arguably, this is an "apples to oranges" comparison; you are comparing article performance (actual citation count) to journal performance (average citation count), so use with caution.   

How to get the comparison between the article and the journal's CiteScore.

In this example, we want to compare an article's 2015 performance to its journal's 2015 CiteScore. Because the 2015 CiteScore is derived from the citation counts of articles published 2012-2014, to keep the comparison "apples to apples" the article we pick must have been published from 2012 through 2014.  (If you want to assess an older article, use a CiteScore from a year that is no more than 3 years past the article's publication date. Because CiteScores go back only to 2011, articles published before 2008 cannot be assessed by this method.)

Example:
Lascaris E., Hemmati M., Buldyrev S.V., Stanley H.E., and Angell C.A. Search for a liquid-liquid critical point in models of silica. Journal of Chemical Physics, 140 (22), article number 224502, 2014. 

  1. Go to CiteScore
  2. In the Search Title box, enter the journal's title and set the year to 2015. 
  3. From the results we can see that the CiteScore for the "Journal of Chemical Physics" in 2015 was 1.98, indicating that the average article published in that journal during 2012-2014 was cited just under 2 times in publications that came out in 2015. 

  4. Now go to the Scopus database to see how many times the article in question was cited by 2015 publications.
  5. Enter the title of the article in the search box; we recommend enclosing the title in quotation marks to make the search more precise.  If the title is long, consider entering just the first 5-7 words.
  6. On the results list, look in the far-right hand column to see how many times this article has been cited since it was published.  Click on that number to bring up the list of those citing publications. 
  7. Because we are comparing this article's performance with the journal's 2015 performance, we must limit the list of citing articles to just those published in 2015.  To do that, go to the left-hand column, click on 2015 under "Year", and then click the "Limit To" button near the top of the column. The list of citing articles is reduced to 5. 

  8. So the article under assessment was cited more than twice as often as expected of the average article in this journal for 2015 (5 citations vs. 1.98). (A code sketch of this comparison follows these steps.)
  9. Now is the time to add the quality assessment.  Is the article being cited for positive or negative reasons?  Who wrote the articles that cite the original article?  Are the citing authors well-respected members of the field?  Do you give the same weight to self-citations as to external citations?  
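
The arithmetic in steps 3-8 reduces to a single ratio, with the publication-window rule from the Important Points as a sanity check. A minimal sketch, using the numbers from this example:

    # Compare an article's citation count in a given year to the
    # journal's CiteScore for that year. The CiteScore window covers
    # the three years before the score year, so the article must have
    # been published in that window.

    def citescore_benchmark(article_year, citations_in_score_year,
                            score_year, citescore):
        if not (score_year - 3 <= article_year <= score_year - 1):
            raise ValueError("Article outside the CiteScore window")
        return citations_in_score_year / citescore

    # This example: a 2014 article cited 5 times in 2015, against the
    # journal's 2015 CiteScore of 1.98. A ratio above 1 means the
    # article out-performed the journal's average article.
    print(round(citescore_benchmark(2014, 5, 2015, 1.98), 2))  # 2.53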

The Journal Impact Factor (JIF) is the original journal-ranking metric; it uses citation data from the Web of Science database.  With this metric you compare an article's citation count to the journal's JIF, the citation count the Web of Science (or Journal Citation Reports) says would be expected of the average article in that journal.

Important Points: 

  • The JIF score goes back to 1997. 
  • The article must be in the Web of Science database.
  • The article must have been published in the two years immediately preceding the journal's JIF.  For example, if you are using the journal's JIF from 2015, the article must have been published in 2013 or 2014. 
  • The JIF does not include "front matter" in its calculations, only substantive research articles. Some journal editors consider this metric's selection of what counts as research versus front matter to lack transparency, making JIF scores difficult to reproduce.  Journals with a substantial amount of front matter have higher JIF scores than CiteScores; consequently, fewer articles within such a journal meet the "expected citation count" when the JIF is used as the benchmark.
  • Arguably, this is an "apples to oranges" comparison; you are comparing article performance (actual citation count) to journal performance (average citation count), so use with caution.   

 

How to get the comparison between the article and the journal's JIF.

In this example, we want to compare an article's 2015 performance to its journal's 2015 JIF. Because the 2015 JIF is derived from the citation counts of articles published 2013-2014, to keep the comparison "apples to apples" the article we pick must have been published in 2013 or 2014.  (If you want to assess an older article, use the Journal Citation Reports database to locate a JIF from a previous year that is no more than 2 years past the article's publication date. Because the Journal Citation Reports database only keeps JIF scores for a decade, articles published more than 12 years before the current JIF cannot be assessed by this method.)

 

For this example we'll use ...

Lee, C., Kang, J. J., von Ballmoos, C., Newstead, S., Uzdavinys, P., Dotson, D. L., & Drew, D. A two-domain elevator mechanism for sodium/proton antiport. Nature 501(7468): 573+, 2013. DOI: 10.1038/nature12484

 

  • Find the article in the Web of Science database. Click on the title to see the full record. 
  • Click on the "View Journal Information" link to find the journal impact factor for 2015 (38.138)




     
  • In the upper right-hand corner, click on the "Times Cited" count.

  • On the results list page, use the "Refine by publication year" option to limit the list to items from 2015.  Remember: the 2015 JIF measures only citations made by publications that appeared in 2015, so we must reduce the list of total citations (in this case 58) to just those from 2015.

  • This results in a list of 22 items published in 2015 that cited the 2013 article in Nature.  Because the 2015 Journal Impact Factor for Nature was 38.138 (i.e., the average article published in Nature during 2013-2014 was cited 38.138 times in 2015), this article under-performed (22 citations versus the average of 38.138).
  • Now is the time for quality assessment.  Is the article being cited for positive or negative reasons? Did this article truly under-perform? Had the authors published in a highly ranked chemistry or physics journal appropriate to this topic (or even an above-average multi-disciplinary title in those fields), would the article have performed "better" against those journals' benchmarks, even though fewer people might have seen or cited it?

Note: the Web of Science database displays only the most recent JIF.  If you want to use a JIF from a previous year, use the Journal Citation Reports database. (A code sketch of the JIF window check and comparison follows.)
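
The same comparison for the JIF, with its two-year window, using the numbers from the Nature example above (a ratio below 1 means the article under-performed the journal average). A minimal sketch:

    # JIF version of the same check: the 2015 JIF counts 2015
    # citations to items published in 2013-2014, so only articles
    # from those two years can be benchmarked against it.

    def jif_benchmark(article_year, citations_in_jif_year,
                      jif_year, jif):
        if not (jif_year - 2 <= article_year <= jif_year - 1):
            raise ValueError("Article outside the JIF window")
        return citations_in_jif_year / jif

    # The Nature example: a 2013 article cited 22 times in 2015,
    # against Nature's 2015 JIF of 38.138.
    print(round(jif_benchmark(2013, 22, 2015, 38.138), 2))  # 0.58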

An article's citation count may be compared to where it falls on the journal's citation distribution curve (CDC).  Unfortunately, most journals do not provide a citation distribution curve, but a simple citation distribution "list" can be generated easily in either the Web of Science or Scopus databases.  When comparing an article's citation count to the citation distribution of a journal, we recommend using articles from the same year: if you are assessing a 2014 article, find the citation distribution for articles published in that journal in 2014.

Because this metric does not use an "average" as the benchmark (as CiteScore and the Journal Impact Factor do), there is less chance of outliers artificially inflating the benchmark. 

 

Determining a Journal's Citation Distribution via Scopus or Web of Science: 

  1. Go to either Scopus or Web of Science 
  2. Find the article and get its citation count (searching by the article's title is usually the most efficient).
  3. Go back to the database's main search page and do a search for the journal title and change the search field to "Source Title" (Scopus) or "Publication Name" (Web of Science) 
  4. On the results list, go to the left-hand column and look under "Source Title" to see which journal titles the search has retrieved.  If it has retrieved titles in addition to the one you want, click on the desired title and then click "Limit To" (Scopus) or "Refine" (Web of Science)
  5. Once the results list refreshes, return to the left-hand column under "Year" (Scopus) or "Publication Years" (Web of Science) and select the year of the article you are assessing. Again, click "Limit To" (Scopus) or "Refine" (Web of Science)
  6. Once the results list refreshes again, return to the left-hand column, this time under "Document Type" (Scopus) or "Document Types" (Web of Science), and select "Article" and "Review".  Again, click "Limit To" (Scopus) or "Refine" (Web of Science).
  7. Once the results list refreshes, go to the header bar above the list and set the list sort option to "Cited By" (Scopus) or "Times Cited - Highest to Lowest" (Web of Science).
  8. You now have a list, ordered by citation count, of all the articles published in the same journal and year as your article of interest. Where your article falls within that list tells you whether it performed better than, worse than, or about the same as its peers.  A good way to quantify this placement is to determine the article's percentile (e.g., its citation count is higher than X% of the articles published in that journal during that year); see the sketch after this list.
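
The percentile in step 8 can be computed directly from the "Cited By" (Scopus) or "Times Cited" (Web of Science) column. A minimal sketch with invented citation counts:

    # Percentile of an article's citation count within the journal's
    # same-year citation distribution. "counts" would be the citation
    # column exported from Scopus or Web of Science; these numbers
    # are invented for illustration.

    def citation_percentile(article_citations, counts):
        below = sum(1 for c in counts if c < article_citations)
        return 100.0 * below / len(counts)

    counts = [0, 0, 1, 1, 2, 3, 3, 5, 8, 13, 21, 40]  # hypothetical
    print(round(citation_percentile(5, counts), 1))  # 58.3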

Although the numbers will be close, do not expect exactly the same numbers from the two databases; they get their data from different sources and define "Document Types" differently.  Do not mix scores from different databases: if you are assessing more than one article, compare Scopus data to Scopus data, not Scopus data to Web of Science data. 

If you are comfortable creating graphs in an Excel file, you may also draw the actual citation distribution curve and plot your article's citation count on the curve (see the Python sketch below for an alternative). See this article for how to do it using data from the Web of Science ...
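
If you prefer Python to Excel, the same curve can be drawn with matplotlib. A sketch with invented counts; the documented Excel workflow is in the article cited below.

    # Plot a journal's one-year citation distribution and mark where
    # a particular article falls. The counts are invented; in practice
    # export them from Web of Science as the cited article describes.
    import matplotlib.pyplot as plt

    counts = sorted([0, 0, 1, 1, 2, 3, 3, 5, 8, 13, 21, 40], reverse=True)
    article_citations = 5  # hypothetical article under assessment

    plt.plot(range(1, len(counts) + 1), counts, marker="o",
             label="Journal articles, ranked by citations")
    plt.axhline(article_citations, linestyle="--",
                label="Article under assessment")
    plt.xlabel("Article rank")
    plt.ylabel("Citations in the benchmark year")
    plt.legend()
    plt.show()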

Larivière, Vincent; Kiermer, Véronique; MacCallum, Catriona J.; McNutt, Marcia; Patterson, Mark; Pulverer, Bernd; Swaminathan, Sowmya; Taylor, Stuart; and Curry, Stephen.  A simple proposal for the publication of journal citation distributions.  bioRxiv preprint, first posted online Jul. 5, 2016.  doi: 10.1101/062109
