- Use the tabs above to select what you are trying to assess: an article, an author, a country, or a journal.
- Read the introduction to the section first. The introduction contains general information about assessment of the item or individual, overall strengths and weaknesses for this type of assessment, and suggested uses for the data.
- After reading the introduction, select a metric. Each metric has its own page that can be reached by clicking on the "down arrow" on the tab, from links within the Introduction, or by using the guide index in the left-hand column of each page.
- The metric's page discusses what data the metric produces, its strengths and weaknesses (important points), and uses beyond those recommended on the introduction page for that section.
If you have been told to use a specific metric, look in the left-hand column (index) to locate the metric.
This guide does not attempt to list every type of metric that exists; instead, we concentrate on those to which ASU faculty have easy access or those we think might be most useful.
When using metrics, keep in mind these "Rules of Thumb":
- Use at least two different metrics for assessment.
Each metric has its own strengths and weaknesses. Selecting metrics that balance each other reduces the possibility of inadvertently favoring or penalizing the item or person under review.
- Compare "apples to apples" not "apples to oranges."
Do not mix scores from different metrics, as each metric uses different sources to obtain its data. Compare Web of Science counts to other Web of Science counts; do not compare a Web of Science count for Article A to the Google Scholar count for Article B. Likewise, do not mix scores across subject fields, because citation behavior varies considerably between them; a low citation count in one field may actually be considered high in another.
- Include qualitative assessment in addition to numerical metrics.
As tempting as it may be to rely on raw numbers alone, they must be put in context. Is the citation count due to positive or negative attention? How does it compare to others in the same subject field, the same journal, and the same timeframe? Is the count increasing or decreasing with each successive year? Are there weaknesses in the metric that would favor or penalize the item or person under review?