

Publishers increasingly present altmetrics alongside usage figures, and a number of tools now exist for measuring the impact of work across multiple journals and sites: Altmetric, ImpactStory, and Plum Analytics.
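As a minimal illustration of how such cross-site data can be consumed programmatically, the sketch below queries Altmetric's public REST endpoint for a single article. The endpoint path and JSON field names reflect Altmetric's public API as generally documented, but should be checked against current documentation; the DOI used is a placeholder.

```python
# A minimal sketch, assuming Altmetric's public endpoint
# api.altmetric.com/v1/doi/<doi>; the DOI below is a placeholder.
import json
import urllib.error
import urllib.request

def altmetric_counts(doi):
    """Fetch attention data for one DOI from the Altmetric public API."""
    url = "https://api.altmetric.com/v1/doi/" + doi
    try:
        with urllib.request.urlopen(url) as response:
            return json.loads(response.read().decode("utf-8"))
    except urllib.error.HTTPError:
        return None  # a 404 simply means no attention data is recorded

data = altmetric_counts("10.1234/example.doi")  # hypothetical DOI
if data:
    print(data.get("score"), data.get("cited_by_tweeters_count"))
```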


There are also new types of research output that offer the potential for new publication metrics. Datasets are increasingly packaged as distinct research outputs: they are not only being deposited in data archives, but are also being associated with new data-centric publications such as Scientific Data from the Nature Publishing Group and Wiley's Geoscience Data Journal. The computer code used to collect and model these data is also increasingly being made publicly available for reuse and development.


These new types of publication require new metrics. It is important to know not only how many times a data publication has been cited, but also how well integrated the dataset is into the semantic web. Similarly, it is not just a question of how many times the computer code has been reused, but of how many times it has been independently developed and those new versions used.
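By way of illustration, one crude proxy for independent development is the number of times a code repository has been forked. The sketch below reads fork and star counts from the GitHub REST API's repository endpoint; the repository named is hypothetical, and fork counts are only a rough signal of genuine reuse.

```python
# A minimal sketch using the GitHub REST API's repository endpoint;
# the owner/repo pair below is a hypothetical placeholder.
import json
import urllib.request

def repo_reuse_signals(owner, repo):
    """Return fork and star counts as rough code-reuse indicators."""
    url = "https://api.github.com/repos/{}/{}".format(owner, repo)
    request = urllib.request.Request(
        url, headers={"Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(request) as response:
        data = json.loads(response.read().decode("utf-8"))
    # forks_count loosely tracks independent development lines;
    # stargazers_count loosely tracks interest in the code
    return {"forks": data["forks_count"], "stars": data["stargazers_count"]}

print(repo_reuse_signals("example-lab", "analysis-code"))  # hypothetical repo
```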


This potential wealth of metrics still requires a great deal of work: standardisation both in the way data is collected and reported by individual sites, and in the way data is aggregated across multiple sites. For example, how do we combine the impact of multiple versions of the same document, and how do we aggregate the impact of a single version across multiple social networks? These metrics offer the potential for expanded scientometric services within information services, although such services may not always have a human face: information professionals may have an increased role in helping researchers to demonstrate the impact of their work, but the filtering and pushing of content to users is likely to become increasingly automated.
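To make the aggregation problem concrete, the sketch below merges per-source counts for two versions of one document by simple summation. The input format is an assumption for illustration, and naive summation would double-count any mention that points at both versions, which is exactly the kind of standardisation question raised above.

```python
# An illustrative sketch only: the per-source count format is assumed,
# and simple summation ignores mentions that cite both versions.
from collections import Counter

def aggregate_versions(version_metrics):
    """Sum per-source counts across all versions of a document."""
    total = Counter()
    for counts in version_metrics:
        total.update(counts)
    return dict(total)

preprint = {"twitter": 40, "mendeley": 12}
published = {"twitter": 15, "mendeley": 30, "blogs": 2}
print(aggregate_versions([preprint, published]))
# {'twitter': 55, 'mendeley': 42, 'blogs': 2}
```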


Filtering and credit

Traditionally, citation analysis has been used primarily for evaluating the impact of research rather than as a tool for filtering articles. Although a library may have used journal impact factors to identify the key journals within a field, researchers would nonetheless be expected to carry out a comprehensive search of the literature related to their research. But as the amount of content created continues to gather pace, automatic filtering becomes increasingly important if researchers are to keep at least a passing understanding of the important issues across their field, rather than only of the increasingly small part that they are investigating. The role of such filtering has barely begun; it seems inevitable that it will eventually move beyond specialised services such as Altmetric and be incorporated into more open and accessible services such as Google Scholar. Such products are likely to require active rather than passive engagement if the subtleties of information behaviour and practice in different fields are to feed into the filtering process. But if information is increasingly filtered and pushed to researchers, there will inevitably be less of a role for the information professional.

Publication metrics will nonetheless continue to have an important role in the assignment of credit, albeit of a more nuanced type than before. New metrics can potentially demonstrate impact in communities that would not have been represented in traditional scientific discourse (for example, the public) and demonstrate the value of products that would not previously have been captured (such as data collections). If filtering poses a challenge to the traditional role of the library and information professional, then the issue of credit provides a more obvious opportunity. As demonstrating impact becomes more important, and the landscape of publication metrics more complex, traditional bibliometric competences will undoubtedly grow in value.
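To make the filtering idea above concrete, the toy sketch below ranks a reading list by a naive weighted attention score and keeps only items above a threshold. The field names, weights, and threshold are illustrative assumptions rather than any real service's scheme; a production filter would also need to normalise by field and by time.

```python
# A toy sketch of metric-based filtering; all fields and weights are
# illustrative assumptions, not any real service's scoring scheme.
articles = [
    {"title": "Paper A", "tweets": 120, "citations": 3},
    {"title": "Paper B", "tweets": 4, "citations": 40},
    {"title": "Paper C", "tweets": 15, "citations": 1},
]

def attention_score(article, tweet_weight=0.1, citation_weight=1.0):
    """Naive weighted score; real filters would normalise by field and age."""
    return (tweet_weight * article["tweets"]
            + citation_weight * article["citations"])

# Keep only articles scoring at least 10, ranked by score
shortlist = sorted(
    (a for a in articles if attention_score(a) >= 10),
    key=attention_score,
    reverse=True,
)
print([a["title"] for a in shortlist])  # ['Paper B', 'Paper A']
```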


A cautionary future


Publication metrics seem destined to play an increasingly important role within information services, both for filtering and for the attribution of credit. Nonetheless, they must be applied with caution. It will always be important to look beyond the filters, and credit can never be reduced to quantitative indicators alone. Part of this is attributable to what has been termed Goodhart's Law: 'When a measure becomes a target, it ceases to be a good measure.'


When publication metrics such as the number of Twitter mentions or downloads become part of the way people find research, or of how credit is assigned, some people will inevitably try to game the system and achieve greater impact than the research deserves.

Part of the problem in identifying potential abuse is that the cultural norms of what is and is not acceptable scientific practice are changing along with the publishing landscape. Whereas academic search engine optimisation may once have seemed inconceivable, maximising the impact and visibility of research is now a recognised part of the publishing process, and a service such as Kudos appeals partly through the promise of levelling the playing field. There will always be practices that are frowned upon by the scientific establishment, but as the scientific community sees itself more and more as a marketplace, the practices deemed disreputable will become fewer and fewer.

Even if publication metrics are not manipulated, they can only ever tell part of the story about the value of research to different users. The idea of reducing impact to a single number, whether an h-index or a journal impact factor, is understandably appealing because it allows simple comparison of similar aggregations. Nevertheless, it is fundamentally flawed: people and their work are multi-faceted, and so is their impact. The new battery of metrics now available is undoubtedly an improvement on existing, limited measures of impact, but any metric is nevertheless a simplification of actual impact.
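For reference, the single-number h-index mentioned above is simple to state: it is the largest h such that h of a researcher's papers have each been cited at least h times. The short sketch below implements that definition, and illustrates just how much information a single figure compresses away.

```python
# Compute the h-index: the largest h such that h papers each have
# at least h citations.
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # at least `rank` papers have `rank`+ citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations
```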


All publication metrics are limited, but it is through understanding their limitations that some of them can be useful. The wide variety of publication metrics now available undoubtedly has a role to play in the future of information services, but those limitations must be recognised if the new metrics are not to be abused in the way that citation metrics have been. This is a role for the library and information professional. Many will be sceptical about the value of the new metrics, both for filtering content and for assigning credit, and such scepticism is essential to ensuring that the most appropriate metrics are identified and that they are used only where applicable.


David Stuart is a research fellow at the Centre for e-Research, King’s College London, and an honorary research fellow in the Statistical Cybermetrics Research Group, University of Wolverhampton


FURTHER INFORMATION


COUNTER: www.projectcounter.org
Altmetric: www.altmetric.com
ImpactStory: impactstory.org
Plum Analytics: www.plumanalytics.com
Scientific Data: www.nature.com/scientificdata
Geoscience Data Journal: onlinelibrary.wiley.com/journal/10.1002/(ISSN)2049-6060
Kudos: www.growkudos.com



