Analysis
Data challenges: teams, tools and law changes

Dealing with data is nothing new to scholarly publishers – but it was clear from a recent ALPSP event that it's an ever-changing battlefield, reports Warren Clark
How to Build a Data Driven Publishing Organisation, held on 20 April at the Institute for Strategic Studies in London and hosted by ALPSP, showed that many people still have much to learn about how to approach the masses of data points generated by companies throughout the publishing cycle. As John Morton, board chair of Zapaygo, said in his keynote: 'Most publishers are using less than five per cent of the data they own.'

The event featured many examples of areas in which data could be collected, analysed and presented in a form that would improve profitability for publishers and provide users with a more personalised experience.

Ove Kähler, director of program management and global distribution at Brill, together with his colleague Lauren Danahy, team leader for applications and data, explored the challenges they faced in developing an in-house data team. Their most significant innovation was to arrange their primary data groups according to where they occur in the workflow: content validation; product creation; content and data enrichment; content and data distribution; product promotion; and product sales. The pair explained how they created a team – from existing staff within the company – giving each member specific responsibility for one of those data groups, and how that led to improved quality and output of data at each step.

The notion that publishers shouldn't assume that dealing with data means employing new staff was echoed throughout the day, with both David Smith, head of product solutions at IET, and Elisabeth Ling, SVP of analytics at Elsevier, suggesting in the panel discussion that people 'look at your own team first', since it was likely that the skills required would already be present.
Choosing tools
Many speakers talked about how they capture, store, analyse and visualise the
data they collect. The most detailed account came from IET's Smith, who overhauled the IT department's software tools to evolve a more accurate suite of visualisations that product teams could use independently, without the need for continuous IT support. Smith explained that those looking for a 'single solution' from a software package that solved all data challenges would be disappointed, before reeling off half a dozen or more software tools that his team had integrated to develop a solution that suited their needs.

In a session that brought a perspective from outside the publishing industry, Matt Hutchison, director of business intelligence and analytics at Collinson Group, a company that runs global loyalty programmes on behalf of major brands, supported this notion by showing how his firm had outsourced some of its functions to Amazon Web Services (AWS).

For Ian Craig, director of strategic market analysis at Wiley, data was used to inform business decisions on journal launches. He explained a major project that involved collecting internal and external data points, such as subject matter, number of submissions, journal usage, funding patterns and many more. The outcomes have helped improve existing journals, and suggest where future resources should be deployed for emerging markets.

Similarly, Blair Granville, insights analyst at Portland Press, demonstrated how his team tracked submissions, subscriptions, open access, citations, usage, commissions and click-through rates in order to feed intelligence back to editorial teams about where their focus should be.
What data can bring
Another theme was the quality of data. As Graeme Doswell, head of global circulation at Sage Publishing, put it: 'You need your data capture processes to be as granular as you want your output to be.' He showed examples of how Sage uses data to show librarians their levels of usage, making it easier for sales teams when it came to renewals. David Leeming, publishing consultant at 67 Bricks, gave a further example in the area of content enrichment.
Data and the law
The most enlightening paper of the day came from Sarah Day, data marketing professional and associate consultant at DQM-GRC, who spoke about data regulation and governance. She warned against complacency and ignorance, particularly with regard to the upcoming General Data Protection Regulation (GDPR). Already law, but due to become enforceable in May 2018 (allowing time for institutions to ensure compliance), this is an EU-wide revision of privacy laws designed to give individuals more control over their personal data.

'In spite of Brexit, the UK – and indeed any country outside the EU that offers goods and services to people in the EU – will have to comply,' said Day. The impacts of the regulation are many, and among the most important things publishers can do is 'be transparent about what you are doing with an individual's data'. Although Day successfully rose to the challenge of explaining GDPR in one minute, it served to demonstrate that managing data in a safe, secure and legal manner is a complex issue that every publisher will have to address head on.

With more than 50 attendees at the event, it's clear that understanding data – and the issues that come with it – is a subject that will only become more important as the amount of data generated grows exponentially.