Citation Research and Impact Metrics
Methods and metrics for evaluating scholarly and research impact.
How to Use This Guide
Citation research, impact metrics, and research analytics are ways to attempt to assess the performance or impact of research, either by analyzing how published items are cited in other publications or by examining other forms of usage (sometimes called alternative metrics, or altmetrics), such as the number of tweets. Citations can be analyzed at a variety of levels:
- Individual journals
- Individual articles
- The output of an individual author
- The output of all authors associated with an institution
This guide does not attempt to list every type of metric that exists; instead, we concentrate on those to which the ASU community has easy access or those we think might be most useful.
When using metrics, keep in mind these Rules of Thumb:
- Use at least two different metrics for assessment. Each metric has its strengths and weaknesses. Selecting metrics that balance each other reduces the possibility of inadvertent favoritism or penalization.
- Compare "apples to apples," not "apples to oranges." Do not mix scores from different metrics, as each metric uses different sources to obtain its data. Compare Web of Science counts to other Web of Science counts; do not compare a Web of Science count for Article A to the Google Scholar count of Article B. Additionally, do not mix scores across different subject fields, as citation behavior varies considerably; a low citation count in one field may actually be considered a high count in another.
- Include qualitative assessment in addition to numerical metrics. As tempting as just using the raw data may be, the numbers must be put in context. Is the citation count due to positive or negative reasons? How does the count compare to others in the same subject field, the same journal, and the same timeframe? Is the count increasing or decreasing with each successive year? Are there weaknesses in the metric that would favor or penalize the item or person under review?
Critical Resource
- The Metrics Toolkit: An excellent and comprehensive resource for researchers and evaluators that provides guidance for demonstrating and evaluating claims of research impact. With the Toolkit you can quickly understand what a metric means, how it is calculated, and whether it is a good match for your impact question. Highly recommended!
Additional Resources
Measuring Research by Cassidy R. Sugimoto; Vincent Larivière
Measuring Research: What Everyone Needs to Know provides an accessible account of the methods used to gather and analyze data on research output and impact. Following a brief history of scholarly communication and its measurement, from traditional peer review to crowdsourced review on the social web, the book looks at the classification of knowledge and academic disciplines, the differences between citations and references, the role of peer review, national research evaluation exercises, the tools used to measure research, the many different types of measurement indicators, and how to measure interdisciplinarity. The book also addresses emerging issues within scholarly communication, the stakeholders behind these analytical tools, the adverse effects of these quantifications, and the future of research measurement.
- Measuring Impact: ACRL Scholarly Communication Toolkit. A comprehensive overview of scholarly impact metrics and related resources, from the Association of College and Research Libraries.
- Harzing's Publish or Perish: Publish or Perish is a software program that retrieves and analyzes academic citations. It uses a variety of data sources to obtain the raw citations, then analyzes these and presents metrics such as: total number of papers and total number of citations; average citations per paper, citations per author, papers per author, and citations per year; Hirsch's h-index and related parameters; Egghe's g-index; and the contemporary h-index.
- San Francisco Declaration on Research Assessment (DORA): There is a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties. To address this issue, a group of editors and publishers of scholarly journals met during the Annual Meeting of the American Society for Cell Biology (ASCB) in San Francisco, CA, on December 16, 2012. The group developed a set of recommendations, referred to as the San Francisco Declaration on Research Assessment.
- Bibliometrics: The Leiden Manifesto for research metrics. Data are increasingly used to govern science. Research evaluations that were once bespoke and performed by peers are now routine and reliant on metrics. The problem is that evaluation is now led by the data rather than by judgement. Metrics have proliferated: usually well intentioned, not always well informed, often ill applied. We risk damaging the system with the very tools designed to improve it, as evaluation is increasingly implemented by organizations without knowledge of, or advice on, good practice and interpretation.
- HuMetricsHSS: HuMetricsHSS is an initiative for rethinking humane indicators of excellence in academia, focused particularly on the humanities and social sciences (HSS). Comprised of individuals and organizations from the academic, commercial, and non-profit sectors, HuMetricsHSS endeavors to create and support a values-based framework for understanding and evaluating all aspects of the scholarly life well-lived and for promoting the nurturing of these values in scholarly practice.
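The h-index and g-index named in the Publish or Perish entry above are straightforward to compute from a list of per-paper citation counts. As an illustrative sketch (the citation counts below are hypothetical, and the g-index shown is the simple variant bounded by the number of papers):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each (Hirsch)."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the top g papers together have >= g^2 citations (Egghe)."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank  # cumulative citations still cover rank squared
    return g

papers = [10, 8, 5, 4, 3, 2, 1]  # hypothetical citation counts for one author
print(h_index(papers))  # 4: four papers have at least 4 citations each
print(g_index(papers))  # 5: top five papers total 30 citations, >= 25
```

The example also shows why the two indices differ: the g-index rewards a few highly cited papers more than the h-index does, since it accumulates citations rather than counting papers past a threshold.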