…aiming to be identified as a journal that has been accepted into the Web of Science Core Collection for the sciences and social sciences.

Hardcastle: As long as journals exist, metrics such as the Journal Impact Factor will have a place for evaluating them. As article-level metrics become cheaper and easier to use, the Journal Impact Factor will be used less frequently as a tool for research assessment and will once again return to its original purpose of collection management and journal evaluation.
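For reference, the Journal Impact Factor Hardcastle mentions is the standard two-year ratio (stated here for context, not quoted from the panel):

\[
\mathrm{JIF}_{y} = \frac{\text{citations received in year } y \text{ to items published in years } y-1 \text{ and } y-2}{\text{number of citable items published in years } y-1 \text{ and } y-2}
\]

Because it is a simple ratio over a known denominator, the contribution of any single citation to the score is easy to trace – a transparency the panel contrasts later with black-box metrics.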


Sponsored content

Product Spotlight

Setting your publishing strategy


What if you could take the guesswork out of setting your publishing strategy?


The research landscape is constantly changing; every year there are almost 1,000 new scholarly journals published. Publishers are trying to keep pace by keeping their content relevant and distinct. Setting the right strategy is critical for the future of your organization, but it can be difficult to get the straightforward insights needed to understand the research landscape. To develop a publishing strategy that significantly differentiates your journals in the market, you need the right data. With analytics from Web of Science, you can make more informed publishing decisions.


See the full research landscape picture: With the most complete, consistent, impartial dataset on the research landscape, you see where you – and your peers – stand in it.


Plan for the future: Spot funding trends and follow hot topics in research to see which areas may need more journal coverage, and identify the best researchers in your field.


Explore new models: Find out how trends in open access journals impact the landscape and authors publishing in your titles and subject categories.


As the world of research becomes more complex, publishers will need to set the right strategy to keep up. With analytics from Web of Science, you can access unique insights to make the most informed decisions and take the guesswork out of setting your publishing strategy.


For more information, contact Timothy Otto: timothy.otto@clarivate.com


Research Information noted a couple of years back that we are moving towards measures that are more difficult to count. Do you agree with this, and if so what are the implications?

Roelandse: Yes, I think this is correct. With the myriad of sources added to the altmetrics portfolio, it has become challenging to assess the weight of the various sources. If your content is cited in a policy document, or a member of parliament tweets about your article, one could assume your work has made a certain impact on society. However, how to assess this high level of granularity is still to be determined. At the same time, it is worth noting that, especially for researchers, the highest score counts, whether or not the data source itself is open, reliable and deduplicated. It is remarkable to see that sources lacking these last qualifying criteria have become the gold standard for citations.

Thiveaud: As Ludo Waltman at CWTS in Leiden has advised (‘A review of the literature on citation impact indicators’, Journal of Informetrics, 10(2): 365–391, May 2016), there is little need for more metrics or more complicated metrics. Many duplicate existing indicators. Many are over-engineered and tempt users into a fallacy of false precision. Other formulae and algorithms are several steps removed from the data and subject to very debatable assumptions. Composite indicators are particularly dubious, since the weightings of the different indicators in a group have no scientific basis and modifying them yields very different portraits of performance.

Hardcastle: There is a rise in black-box metrics that can't be replicated easily, such as Eigenfactor and Altmetric Attention Scores, which makes it very difficult to work out the effect of a single citation or piece of attention on the final metric. As science outputs get more complex and the requirements of funders and institutions change, the metrics they choose will reflect the diversity of need.
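To illustrate Thiveaud's point about weightings, take an invented example (not one from the interview): two journals scored on two normalised indicators, Journal A at (0.9, 0.2) and Journal B at (0.4, 0.8). Two equally arbitrary weightings of the same data produce opposite portraits of performance:

\[
\begin{aligned}
\text{weights } (0.7,\,0.3):&\quad A = 0.7(0.9) + 0.3(0.2) = 0.69 \;>\; B = 0.7(0.4) + 0.3(0.8) = 0.52\\
\text{weights } (0.3,\,0.7):&\quad A = 0.3(0.9) + 0.7(0.2) = 0.41 \;<\; B = 0.3(0.4) + 0.7(0.8) = 0.68
\end{aligned}
\]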


James Hardcastle, Taylor & Francis


“The predominant trend in metrics has been the increasingly diverse nature of what can be measured”


Could publishing metrics be simplified further? Would that help the community? Or do measures actually need to be more complicated?

Roelandse: We have a tendency to provide researchers with a bag of metrics and to add more scores on a regular basis. I doubt this is of use to most researchers and it may even cause confusion. Which one is relevant, which one isn't? This is where we could play a role as publishers.

Thiveaud: Good practice dictates that one should use several measures that address different aspects or dimensions of the phenomenon being measured, that the indicators should be specific to the questions being asked (which is why there is no standard or cookbook methodology), and that the measures should not be redundant (highly correlated with other indicators being used that measure the same thing). Generally, only a few fairly simple indicators are adequate, usually a mix of size-dependent and size-independent indicators.

Hardcastle: There are two conflicting demands. Publishers, authors, institutions and other metric consumers need metrics that reflect the different types of research, subject differences and the increasing range of outputs. They also want simple metrics that are easy to understand and use. The mathematical complexity of a metric isn't a problem as such; instead, it needs to be easy to understand the underlying drivers of a change in the metric and what is being measured. Ultimately it isn't necessarily the complexity or the number of metrics that causes problems, but using the wrong metric in the wrong place. The publishing and metrics industry has not been good at articulating how different metrics should and shouldn't be used.



