

Citation Research and Impact Metrics

How to Use This Guide

Citation research, impact metrics, and research analytics attempt to assess the performance or impact of research by analyzing how published items are cited in other publications, or by using other forms of usage data (sometimes called alternative metrics), such as the number of tweets. Citations can be analyzed at a variety of levels:

  • Individual journals
  • Individual articles
  • The output of all authors associated with an institution
  • The output of an individual author
  • The output of all authors associated with institutions in a country or continent

Use the tabs to the left to learn more about what you are trying to measure:

  • Read the introduction to each section first. The introduction contains general information about assessment of the item or individual, overall strengths and weaknesses for this type of assessment, and suggested uses for the data.
  • After reading the introduction, select a metric. Each metric has its own page that can be reached by clicking on the down arrow on the tab, from links within the Introduction, or by using the guide index in the left-hand column of each page.
  • The metric's page discusses what data the metric produces, its strengths and weaknesses (important points), and uses beyond those recommended on the introduction page for that section.

If you have been told to use a specific metric, look in the left-hand column (index) to locate the metric.

This guide does not attempt to list every type of metric that exists; instead, we concentrate on those to which the ASU community has easy access or those we think might be most useful.

When using metrics, keep in mind these Rules of Thumb:

  1. Use at least two different metrics for assessment.
    Each metric has its strengths and weaknesses. Selecting metrics that balance each other reduces the possibility of inadvertent favoritism or penalization.
  2. Compare "apples to apples" not "apples to oranges."
    Do not mix scores from different metrics as each metric uses different sources to obtain data. Compare Web of Science counts to other Web of Science counts; do not compare a Web of Science count for Article A to the Google Scholar count of Article B. Additionally, do not mix scores across different subject fields as citation behavior varies considerably; a low citation count in one field may actually be considered a high count in another.
  3. Include qualitative assessment in addition to numerical metrics.
    As tempting as using the raw data alone may be, the numbers must be put in context. Is the citation count due to positive or negative reasons? How does the count compare to others in the same subject field, the same journal, and the same timeframe? Is the count increasing or decreasing with each successive year? Are there weaknesses in the metric that would favor or penalize the item or person under review?

Index

Introduction to:
Article Assessments
Author Assessments
Country Assessments
Journal Rankings

Metrics:
Altmetric Score

Citation Benchmarking
Citation Counts for: 
---Articles
---Authors
---Countries
Citation Distribution, see Citation Benchmarking
CiteScore 
Collaboration

Eigenfactor Score, see Other Journal Rankings
ERIH Plus, see Other Journal Rankings

Field-weighted citation impact (FWCI), see Citation Benchmarking
FWCI, see Citation Benchmarking

Google Scholar (Journal) Metrics, see Other Journal Rankings

Harzing, see Other Journal Rankings
Hirsch-index
h-index

iCite for:
---Articles, see Citation Benchmarking
---Authors

JIF
Journal Impact Factor

NIH ranking, see iCite

Publish or Perish software, see Citation Counts for Authors: Other Sources

RCR, see iCite
Relative Citation Ratio, see iCite

Scimago Country Rank (SCR)
Scimago Journal Rank, see CiteScore 
SJR, see CiteScore
SNIP, see CiteScore
Source Normalized Impact per Paper, see CiteScore

Usage Counts
