Research Metrics
[Figure: Journal articles on IOP Science (left) and Nature.com (right) are just some of the papers now sporting alternative ways of measuring impact]
stated, ‘authors are increasingly under pressure to prove the impact of their research and by aggregating these data we are helping them get the recognition they need. We are committed to providing researchers with the best publishing and reading experience and look forward to adding more metrics over the coming year.’

In many ways, journal article altmetrics may be considered quite conservative. After all, altmetrics offer the potential to provide indicators of a far wider range of scholarly activities. It may not be the research article that is making the most impact, but the blog post where the findings are summarised, or the ideas that are shared through a Twitter account. Alternatively, the paper itself may have garnered little interest, whilst other outcomes from a research project, such as the computer code or datasets, may underpin other people’s work of far greater influence. Some third-party metric providers, however, have moved beyond the journal article and allow researchers to bring together metrics for a wide range of content. For example, as well as journal articles, ImpactStory allows researchers to add web page URLs and usernames for Slideshare (a slide-sharing website), Dryad (a repository for data in the biosciences), and GitHub (a host for software development projects).
The value of altmetrics

Despite the increasing accessibility of an ever-wider range of metrics, the question remains as to what the raft of new metrics means, and the extent to which they are of use as more than mere curiosities. Although it is over 50 years since Eugene Garfield popularised the application of citation analysis, there continues to be disquiet amongst the research community
when too much emphasis is placed on citation impact. Like the commonly quoted Gresham’s law that ‘bad money drives out good’, there is a fear that when both good and bad citations (i.e. those that occur naturally and those created purely to inflate a researcher’s impact) are valued the same, the system is vulnerable to abuse. While there are fears that traditional citations are too easily abused, the potential for abuse of altmetrics seems far greater. Unlike traditional citations, altmetrics can be inflated without peer or editorial approval; no one polices mentions on Twitter or bookmarks on CiteULike. While it may seem unlikely that researchers would spend their time artificially
inflating their own content, given that the impact of much of the content is relatively low it is not beyond reason that some may attempt it. Such tasks could easily be outsourced too: services such as Amazon’s Mechanical Turk enable dozens of links and social network profiles to be created for little more than the price of a cup of coffee.
Such problems are not insurmountable, however, and, whilst some researchers may try to game the system, new indicators will inevitably be developed that are increasingly hard to manipulate. In the same way that Google’s PageRank improved search results by no longer weighting all links to a web page equally, so too are indicators likely to recognise that not all tweets or mentions on a particular social media site are equal. This is already recognised with regard to tweets on ImpactStory, where the site uses data from Topsy.com to categorise Twitter accounts according to influence. It should also be remembered, before people are too quick to dismiss altmetrics, that whilst commercial organisations have a financial incentive to exploit search engine rankings with little chance of punishment, for researchers the rewards are smaller and the penalties greater; after all, reputation is of prime importance.
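To make the idea concrete, here is a minimal sketch of how a weighted indicator differs from a raw count. The tier names, weights, and accounts are invented for illustration and do not reflect how ImpactStory or Topsy actually score influence; the point is simply that weighting mentions by account influence, and counting each account once, makes the metric harder to inflate by repetition.

```python
# Hypothetical influence tiers and weights (illustrative only; not the
# actual categories or weights used by ImpactStory or Topsy).
INFLUENCE_WEIGHTS = {"highly_influential": 3.0, "influential": 1.5, "ordinary": 1.0}

def weighted_mention_score(mentions):
    """Score a paper's tweets by account influence rather than a raw count.

    `mentions` is a list of (account, tier) pairs. Each account is counted
    once, so repeated mentions from the same account cannot inflate the score.
    """
    seen = set()
    score = 0.0
    for account, tier in mentions:
        if account in seen:
            continue  # ignore repeat mentions from the same account
        seen.add(account)
        score += INFLUENCE_WEIGHTS.get(tier, 1.0)
    return score

# Invented example accounts.
mentions = [
    ("@journal_bot", "ordinary"),
    ("@field_leader", "highly_influential"),
    ("@field_leader", "highly_influential"),  # duplicate, not double-counted
    ("@colleague", "influential"),
]
print(weighted_mention_score(mentions))  # 5.5, versus a raw tweet count of 4
```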
Standardisation
Altmetrics is still very much in the early stages of its development, and much of the focus is currently on capturing the data that is available. However, in the same way that citation-based metrics have evolved, so too will altmetrics. Of particular importance is that altmetrics are developed in the open. This is not only so that weaknesses can be highlighted and improvements suggested, but also so that standards can emerge across sites and services. Standardisation is particularly important if metrics are to be useful. To know that a paper has been bookmarked 20 times, or that a Twitter update has been retweeted 20 times, has little meaning unless there are additional metrics against which this can be compared. As such, if publishers state that a paper has had a certain number of downloads, it is important that they are clear about how such downloads are calculated, especially as there are conflicts of interest between those publishing the indicators and those who want to make use of them.
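One possible way of giving a raw count meaning is to express it against a reference sample, for instance as a percentile among comparable papers. The sketch below assumes an invented sample of bookmark counts; in practice, the comparison is only meaningful if the reference set and the counting method are openly and consistently documented, which is exactly why standardisation matters.

```python
from bisect import bisect_left

def percentile_rank(value, reference_sample):
    """Express a raw metric as a percentile within a reference sample.

    `reference_sample` might be bookmark counts for papers in the same
    field and year; the sample used here is invented for illustration.
    """
    ordered = sorted(reference_sample)
    position = bisect_left(ordered, value)
    return 100.0 * position / len(ordered)

# Invented bookmark counts for ten comparable papers.
field_bookmarks = [0, 1, 1, 2, 3, 4, 5, 8, 12, 30]
print(percentile_rank(20, field_bookmarks))  # 20 bookmarks is about the 90th percentile
```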