Citation benchmarking metrics measure an article's performance by comparing the article in question with the "average article" in the same field or journal. For some of these metrics the calculation is done for you, while others require you to do the math and make the assessment manually.
The following metrics are provided for citation benchmarking:
The Field-Weighted Citation Impact (FWCI) score comes from the Scopus database and shows how the article's citation count compares to similar articles in the same field and timeframe.
The FWCI is especially useful in situations where article performance is being judged across a variety of subject areas, each with different citation behaviors.
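The idea behind the FWCI can be sketched as a simple ratio: the article's citation count divided by the average citations of comparable articles (same field, same publication window). The function and numbers below are hypothetical, for illustration only; the real FWCI is computed by Scopus from its own comparison sets.

```python
# Hypothetical sketch of the FWCI idea: an article's citations divided by
# the mean citations of similar articles (same field, same timeframe).
# All numbers are invented for illustration.

def fwci(article_citations, similar_article_citations):
    """Ratio of the article's citations to the mean of its comparison set."""
    expected = sum(similar_article_citations) / len(similar_article_citations)
    return article_citations / expected

# A score above 1.0 means the article is cited more than expected for its field.
print(fwci(30, [10, 20, 30]))  # 30 / 20.0 = 1.5
```

A score of exactly 1.0 would mean the article performs exactly as expected for its field and timeframe.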
To get an article's FWCI:
About the Relative Citation Ratio (RCR)
iCite provides analysis for articles appearing in PubMed, drawing citation data from several sources: PubMed Central, European PubMed Central, CrossRef, and Web of Science. For each article, iCite reports the Relative Citation Ratio (RCR), a citation-based metric developed by the NIH.
To get the RCR:
For details about the information on the results page (and more), click on the Help link in the upper right corner of the screen.
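At a very high level, the RCR compares an article's citations-per-year to the expected citation rate of its field. The sketch below is a heavy simplification with invented numbers; the real iCite calculation defines the "field" through the article's co-citation network and benchmarks against NIH-funded papers, as described in the iCite Help pages.

```python
# Much-simplified sketch of the RCR idea. The actual iCite method derives
# the field citation rate from the article's co-citation network; here it
# is simply passed in as a hypothetical number.

def relative_citation_ratio(citations, years_since_pub, field_citation_rate):
    """Article's citations-per-year divided by its field's expected rate."""
    article_citation_rate = citations / years_since_pub
    return article_citation_rate / field_citation_rate

print(relative_citation_ratio(40, 5, 4.0))  # (40 / 5) / 4.0 = 2.0
```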
CiteScore is a relatively new (December 2016) product from Elsevier that uses citation data from the Scopus database to rank journals. With this metric, you compare an article's citation count to what CiteScore says would be expected of the average article in that journal.
How to compare an article's citation count with the journal's CiteScore:
In this example, we want to compare an article's performance in 2015 to its journal's 2015 CiteScore. Because the 2015 CiteScore is derived from the citation counts of articles from 2012-2014, to keep the comparison "apples to apples" the article we pick must have been published from 2012 through 2014. (If you want to compare an older article, use a CiteScore from a year that is no more than 3 years past the article's publication date. Because CiteScores go back only to 2011, articles published prior to 2008 cannot be assessed by this method.)
Lascaris E., Hemmati M., Buldyrev S.V., Stanley H.E., and Angell C.A. Search for a liquid-liquid critical point in models of silica. Journal of Chemical Physics, 140 (22), article number 224502, 2014.
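The comparison itself is simple division: the article's citation count over the journal's CiteScore (the expected citations for an "average" article in that journal). The citation count and CiteScore below are invented for illustration, not the actual values for the example article.

```python
# Hypothetical comparison of an article's citation count to its journal's
# CiteScore. Both numbers below are invented for illustration.

def citescore_ratio(article_citations, journal_citescore):
    """Ratio > 1.0 means the article outperforms the journal's average article."""
    return article_citations / journal_citescore

print(citescore_ratio(12, 3.0))  # 4.0: cited four times as often as the average
```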
The Field Citation Ratio is calculated for all publications in Dimensions which are at least 2 years old and were published in 2000 or later. It is calculated by dividing the number of citations a paper has received by the average number received by documents published in the same year and in the same Fields of Research (FoR) category. Values are centered around 1.0 so that a publication with an FCR of 1.0 has received exactly the same number of citations as the average, while a paper with an FCR of 2.0 has received twice as many citations as the average for the Fields of Research code(s).
In order to calculate an FCR score for an article, a minimum of 500 articles must be categorized in the applicable FoR category for the year in which the article was published. If an article is categorized in more than one FoR code, then only those codes that meet the threshold of 500 labeled articles in that year are used to calculate the FCR score.
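The rules above can be sketched in a few lines of code. Everything here is hypothetical: the data structure, the way multiple qualifying FoR codes are combined (averaged here, which is an assumption, not Dimensions' documented method), and the numbers.

```python
# Sketch of the FCR rules described above: only FoR codes with at least
# 500 labeled articles in the publication year contribute to the benchmark.
# The data structure and combination rule are assumptions for illustration.

def field_citation_ratio(article_citations, for_code_stats):
    """for_code_stats maps FoR code -> (article count that year, mean citations).
    Codes under the 500-article threshold are ignored; returns None if
    no code qualifies (an assumption about how edge cases are handled)."""
    qualifying = [mean for count, mean in for_code_stats.values() if count >= 500]
    if not qualifying:
        return None
    expected = sum(qualifying) / len(qualifying)  # assumed combination rule
    return article_citations / expected

print(field_citation_ratio(20, {"0801": (600, 10.0)}))  # 2.0: twice the average
```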
Compared to the Relative Citation Ratio (RCR), the FCR uses a more tangible definition of a field: you can get a list of all articles in a specific FoR code. The same is not possible for the RCR, where the field is relative to the publication in question.
An article's citation count may be compared to where it falls on the journal's citation distribution curve (CDC). Unfortunately, most journals do not provide a citation distribution curve, but a simple citation distribution "list" can be determined easily through the Scopus database. When comparing an article's citation count to the citation distribution of a journal, we recommend using articles from the same year; so if you are assessing a 2014 article, find the citation distribution for articles published in that journal in 2014.
Because this metric does not use an "average" as the benchmark (as CiteScore or the Journal Impact Factor do), there is less chance of outliers artificially inflating the benchmark.
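Once you have the journal's citation list for the relevant year, locating an article on the distribution amounts to a percentile calculation. The function and distribution below are hypothetical, for illustration only.

```python
# Sketch: locate an article on a journal's citation distribution by
# computing the percentage of same-year articles it ties or beats.
# The distribution below is invented for illustration.

def citation_percentile(article_citations, distribution):
    """Percent of the journal's same-year articles cited no more than this one."""
    at_or_below = sum(1 for c in distribution if c <= article_citations)
    return 100.0 * at_or_below / len(distribution)

journal_2014_counts = [0, 1, 1, 2, 3, 5, 8, 13, 21, 40]
print(citation_percentile(8, journal_2014_counts))  # 70.0
```

Note how the two heavily cited articles (21 and 40 citations) barely affect the percentile, whereas they would pull a simple average upward.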
Determining a Journal's Citation Distribution via Scopus:
If you are comfortable creating graphs in Excel, you can also plot the actual citation distribution curve and mark your article's citation count on it. See this article for how to do it using data from the Web of Science: