When Eugene Garfield first proposed what became the Science Citation Index (which evolved into the current Web of Science Core Collection database), the purpose was to determine how an article had influenced future research by looking at which more recent publications referenced the older article. Counting the number of citations and using that number for assessment, while never the intention, nonetheless proved too tempting to resist.
The original Science Citation Index focused on the science literature, as these fields primarily publish in journals and frequently cite the previously published literature. Comparing citation counts within this environment at first seemed to work, but as the Index expanded to include journals from engineering, the social sciences, and the humanities, the variable publication and citation behavior of these additional fields created a problem. What was a typical citation count in one area was not necessarily typical in others; consequently, comparing raw citation counts across disparate fields became a comparison of apples to oranges.
In recent years, metrics have begun measuring usage and mentions in non-traditional sources such as social media in addition to traditional citation counting. Metrics have also been developed to provide a more apples-to-apples comparison by putting a numerical measure into context within a field, a journal, or some other benchmark. Regardless of the metric, each has its weaknesses and needs to be combined with qualitative assessment; an article may be cited, read, or mentioned for negative reasons as well as positive ones. Why it is getting noticed, and who is noticing it, need to be considered along with the citation count.
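To illustrate the benchmarking idea described above, here is a minimal sketch of one common approach: dividing an article's raw citation count by the average count for articles in the same field and publication year, so that 1.0 means "typical for the field." This is similar in spirit to field-normalized indicators used by citation databases, but the data, field groupings, and function below are invented for illustration only.

```python
# Hypothetical sketch of field normalization. All figures are invented;
# real indicators draw baselines from a full citation database.
from statistics import mean

# (field, year) -> citation counts of all indexed articles in that group
baseline = {
    ("cell biology", 2020): [45, 60, 30, 85, 20],
    ("history", 2020): [2, 0, 5, 1, 3],
}

def normalized_impact(citations, field, year):
    """Raw citations divided by the field/year average (1.0 = typical)."""
    return citations / mean(baseline[(field, year)])

# A history article with only 6 citations scores well above its field's
# average, while a cell-biology article with 40 sits below its own.
print(round(normalized_impact(6, "history", 2020), 2))       # → 2.73
print(round(normalized_impact(40, "cell biology", 2020), 2)) # → 0.83
```

The point of the normalization is visible in the output: the history article's smaller raw count represents greater relative impact within its field than the cell-biology article's larger one, which is exactly the apples-to-apples comparison raw counts cannot provide.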