JULY 2014
publicity. Moreover, it’s likely to deter companies from starting unnecessary libel actions. But the scope of the law goes far beyond corporate reputation – it goes to the heart of individual reputations and perceived personal image. And as the global online conversation gets louder, broader and much more visible, everyone is a potential claimant.
Informed comments
The new legislation has strong relevance to online media that encourage or permit comment and debate. In recent years, some high-profile sites have come under fire for failing to manage posts that some consider defamatory. The UK Defamation Act brings this activity into sharp focus.
One of the Act’s most significant components is the introduction of new measures designed to help alleged victims of online defamation resolve their dispute directly with the individual who has posted the potentially defamatory statement – rather than with the site that published the offensive post. This clearly has positive implications for online publishers.
Furthermore, the Act means that website operators no longer need to pre-moderate user comments. It introduces a Section 5 defence, giving publishers 48 hours to remove potentially defamatory comments upon receipt of a written complaint. This ‘report and remove’ policy provides welcome protection for web operators, enabling them to handle complaints quickly and painlessly, and to manage correspondence in-house rather than incurring unnecessary legal fees. But to benefit from the Section 5 defence, companies must first establish clear processes for the efficient handling of libel complaints, and build the infrastructure needed to invoke it (See Figure 1: At-a-glance: the Defamation Act 2013).
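The infrastructure point is worth making concrete. A publisher relying on the Section 5 defence needs, at minimum, a way to record when each written complaint arrived and to surface any complaint approaching the 48-hour deadline. The sketch below is purely illustrative — the class and field names are assumptions, not part of any prescribed system — but it shows the kind of deadline tracking involved:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# The 48-hour response window comes from the Section 5 regime;
# everything else here is a hypothetical naming choice.
RESPONSE_WINDOW = timedelta(hours=48)

@dataclass
class LibelComplaint:
    post_id: str
    received_at: datetime          # when the written complaint arrived
    resolved_at: Optional[datetime] = None

    def deadline(self) -> datetime:
        return self.received_at + RESPONSE_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        # An unresolved complaint past its deadline risks losing the
        # Section 5 defence, so it should be escalated immediately.
        return self.resolved_at is None and now > self.deadline()
```

In practice such a tracker would sit alongside the moderation queue, so that a complaint received on a Friday evening is not discovered on Monday morning with the window already closed.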
The legislation is, on the face of it, good news for online businesses, making it harder for individuals to sue for defamation. But, despite the positive regulatory developments, there remain many hidden pitfalls. In an era of heightened social media engagement and rapidly escalating online interaction, website operators cannot afford to be complacent.
Running commentary
Online publishers, and indeed consumer brands, are increasingly using commenting platforms to build interactive user communities and encourage brand engagement. But as users’ readiness to comment increases and consumers become more engaged, the risk of defamatory comments slipping through the net escalates in step.
Publishers’ growing dependence on commenting platforms does, of course, make sound business sense. In the fiercely competitive online marketplace, customer engagement and site ‘stick-ability’ are naturally regarded as key brand imperatives – and users’ ability to contribute to online discussions and drive the debate is seen as a central component in helping deliver them. But the renewed emphasis on commenting goes beyond interactivity; it has commercial and strategic implications that go right to the heart of traditional publishing models.
As publishers slowly migrate from paper to pixel – from print to digital – the operational demands of online media are, in tandem with the steady decline in print advertising, placing exponential pressure on the P&L. To compete in the digital marketplace, publishers commonly need to serve high-frequency, high-volume content – but this is offset against the ongoing requirement to optimise resources and minimise editorial costs. As such, driving user-generated content (UGC) has become a major strategic objective and, in the process, has sparked a surge in publisher deployment of commenting platforms.
Commenting platforms can indeed play a major role in helping publishers manage resources, optimise processes and satisfy burgeoning content demands. The smartest platforms allow marketers and publishers to offer visitors opportunities to create and amplify content across all of a brand’s digital touch points. With one simple click, users can comment on and respond to site content – and they can also share that content across social networks, spark real-time conversations and see activity streams that automatically appear across a brand’s mobile and desktop experiences.
From a user perspective, commenting thereby becomes a straightforward process, with many of the traditional barriers to posting content removed or simplified. Likewise, for publishers, UGC platforms can help deliver their key strategic goals – allowing them to generate original content simply, efficiently and cost-effectively. The challenge, however, is to ensure that they do it safely and compliantly.
More advanced commenting platforms do have the in-built capability to screen content and flag profanities or potentially offensive language – but on its own, such automated functionality is not enough. Often, casual and seemingly inoffensive words can, in context, be open to much more damaging interpretation. But, using
automated systems alone, these potentially offensive statements can easily go under the radar and end up being published – risking brand reputation and, at the very least, causing great operational inconvenience.
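To illustrate the limitation, consider the simplest form of automated screening: a word-list filter. The sketch below is an assumption for illustration only – the block list and function names are invented, and real platforms use far more sophisticated techniques – but it demonstrates the core weakness. A crude insult is caught; a carefully worded, contextually defamatory statement sails through:

```python
import re

# Illustrative block list – an assumption for this sketch, not any
# real platform's configuration.
BLOCKLIST = {"scam", "fraud", "thief"}

def flag_for_review(comment: str) -> bool:
    """Flag a comment for human review if it contains a blocklisted word.

    Note what this cannot do: a defamatory statement that uses only
    innocuous vocabulary will never be flagged, which is why automated
    screening alone is not enough.
    """
    words = set(re.findall(r"[a-z']+", comment.lower()))
    return bool(words & BLOCKLIST)
```

Running this against “Their accounting is creative, to say the least” returns no flag at all, despite the statement being potentially actionable in context – precisely the gap that human moderation and clear escalation processes must cover.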
In light of the new defamation legislation, online publishers need to do all they can to ensure that their systems are configured to identify, flag and manage problem posts, and to mitigate the risk of online defamation. So the question for publishers is simple: how do you create the optimal environment to encourage user comments, whilst at the same time reducing the risk of irresponsible or libellous posts?
Perhaps the most effective solutions will encompass social login – which can support both objectives in equal measure. Certainly, the ability to identify and authenticate users will be crucial for online publishers in the new legislative environment – and this is a clear strength of social login solutions.
The Sense of Identity
The Defamation Act introduces guidelines that allow alert website operators to clear themselves of responsibility for errant comments, and instead pass that responsibility to the person who posted the offensive remarks.
However, this places a greater onus on site owners to be able to identify individual users. As a result, websites with message boards are increasingly being advised to register users before they can post. This makes good sense. Registration provides a platform to establish terms and conditions and to inform users that their details may be divulged if they post defamatory comments – this alone can often prove a key deterrent. But for online publishers, the challenges at registration are numerous.
Challenges
The fake ID: lie-ability and liability
Naturally, registration should capture full contact details, including an authentic email address – but user IDs are relatively easy to fake. There are various methods of authenticating email accounts – but none is infallible; a legitimate email address does not always guarantee a legitimate identity. This in itself can leave publishers exposed in the event of a defamation claim.
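The standard method of authenticating an email account is the confirmation link: a token is sent to the address, and only someone with access to that inbox can echo it back. The sketch below – a minimal, hypothetical implementation using signed tokens, not any particular registration product – shows the mechanism, and also why it falls short: it proves control of an inbox, not a real-world identity.

```python
import hashlib
import hmac
import secrets

# Per-deployment signing key – an assumption for this sketch; in
# production this would be stored securely, not generated at import.
SECRET = secrets.token_bytes(32)

def issue_token(email: str) -> str:
    # Embedded in the confirmation link emailed to the address.
    return hmac.new(SECRET, email.lower().encode(), hashlib.sha256).hexdigest()

def confirm(email: str, token: str) -> bool:
    # True only if the token matches the address it was issued for.
    # Note: this verifies inbox access, not the registrant's identity.
    return hmac.compare_digest(issue_token(email), token)
```

A throwaway webmail account passes this check as readily as a genuine one – which is the exposure the article describes, and part of the case for social login, where identity is vouched for by an established third-party network.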
The hidden identity
In addition to the fake identity, there is also the potential threat of the ‘hidden identity’; comments posted anonymously can leave online publishers vulnerable to untraceable defamation. Whilst major brands like Facebook
www.lawyer-monthly.com