FEATURE
Research Assessment
Current research assessment could miss the big picture
Traditional methods of research assessment could be failing those researchers who are fully embracing the possibilities of Web 2.0, argues David Stuart
Web 2.0 technologies have provided a host of new ways for researchers to publish, share, and discuss research in their field. The formal publication and discussion of research that has mainly taken place within journals, conference papers, and other traditional forms of publication can now be supplemented by a host of less formal publications. Blogs provide a forum for ongoing discussion during the research process. Wikis allow numerous contributors to work on a document at the same time. Similarly, social bookmarking and microblogging services allow the instant highlighting of documents that a researcher deems worthy of comment.

These new means of communication potentially allow for the faster dissemination of ideas and feedback, not only among researchers within traditional research institutions, but also beyond the walls of academia, where the massification of higher education has led to an increasingly highly educated workforce. If the use of such technologies is to be encouraged, it is important that we have appropriate ways of measuring the impact of the new methods of publication. Attention has been described as the currency of academia, but at the moment many of the most innovative researchers, who are embracing the opportunities the new technologies offer, are being short-changed by the ways research is judged.
Estimating attention

In the traditional publishing model it is possible, at least theoretically, to estimate the attention that a researcher has received according to how many times their work has been cited in comparison to other researchers in the field. Citation analysis is based on the idea that, when researchers publish their research, they will cite those sources that have influenced their work. By counting up the citations we can determine the impact of the different contributions. As such, citation-based metrics have been used to inform decisions on hiring researchers, offering tenure, and allocating research funding.

16 Research Information JUN/JUL 2011

In reality, however, estimating the attention that a work has received is not so simple. Citation indexes only include a limited number of publications, and a researcher's contribution to science cannot easily be reduced to a single metric. In addition, the publication process means that such metrics are necessarily slow, and the focus on a single metric increases the chance that it will be open to abuse. That is not to say that citations cannot provide useful insights, merely that these insights are limited.

'When, from the perspective of research assessment, there is no ostensible difference between a highly-cited blog with tens of thousands of hits a month and one that no one visits, there is little incentive to create a vibrant, worthwhile blog'

In contrast, Web 2.0 technologies provide the opportunity for research assessment to be based on a far larger corpus of documents, allow a far wider range of research activity to be captured more quickly, and are potentially more difficult to abuse. It has always been the case that a significant proportion of publications have been excluded from citation indexes. Whole formats have either been ignored or vastly under-represented, from monographs and conference proceedings to the grey literature that comes from think tanks, commercial research reports, and government organisations. And although Google Scholar incorporates a far greater variety of documents than has traditionally been included in citation indexes, this has come at the price of lower reliability in the results. The automatic collection of data from across the web means that there will necessarily be mistakes in the indexing, and as such the traditional, primarily journal-focused, citation indexes continue to be considered the indexes of authority and are those predominantly used in research assessment. However, while a more inclusive citation index of traditional publications would be more useful, it would nevertheless fail to take into consideration the recent changes in the publishing process.
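The counting at the heart of citation analysis is simple to sketch. Here is a minimal illustration in Python; the paper identifiers and citation links are invented for the example and do not come from any real citation index:

```python
from collections import Counter

# Hypothetical corpus: each paper maps to the works it cites.
papers = {
    "paper_a": ["paper_c", "paper_d"],
    "paper_b": ["paper_c"],
    "paper_c": ["paper_d"],
}

def citation_counts(corpus):
    """Count how often each work is cited across the corpus."""
    counts = Counter()
    for cited_works in corpus.values():
        counts.update(cited_works)
    return counts

counts = citation_counts(papers)
# paper_c and paper_d are each cited twice; paper_a and paper_b, never.
```

The limitation the article describes is visible even here: anything outside the corpus — a blog post, a dataset, a report from a think tank — simply never appears in the tally.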
Lack of innovation incentive

Today researchers are encouraged to publish research in new and innovative ways, engaging in online conversations with other researchers as well as the public through blogs, videos, wikis, and the opening up of datasets. However, unless metrics are established that take these new technologies into account, and are incorporated into research assessment, more often than not the technologies will be used only half-heartedly, or not at all.
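Any metric of the kind called for here would have to fold informal web signals into the score alongside formal citations. The sketch below is purely hypothetical: the signal names and weights are invented placeholders, not values drawn from any established altmetric:

```python
def attention_score(citations, blog_mentions, bookmarks,
                    weights=(1.0, 0.2, 0.1)):
    """Weighted composite of one formal signal (citations) and two
    informal ones (blog mentions, social bookmarks).

    The weights are arbitrary illustrative choices; calibrating them
    is exactly the open problem the article identifies.
    """
    w_cite, w_blog, w_mark = weights
    return w_cite * citations + w_blog * blog_mentions + w_mark * bookmarks

score = attention_score(citations=10, blog_mentions=25, bookmarks=40)
```

Under a scheme like this, a widely read blog contributes something to a researcher's standing rather than nothing — though, as the article notes, each new signal is also a new surface for abuse, so the weighting would need to be far more robust than this sketch.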