Your own or your organisation’s sites: Content moderation

Introduction

This third post in the series entitled “Your own or your organisation’s sites” addresses the issue of content moderation. While that issue is related to content creation, it is sufficiently significant to warrant treatment as a subject in its own right.

Moderation decisions

Preferably before the site is launched, a decision needs to be made as to whether to moderate user contributions and, if so, how. There are three potential approaches:

  • not to moderate at all;
  • to moderate before user-generated content goes live on the site; or
  • to moderate after user-generated content goes live on the site (either across the board or by way of spot-checking).

This decision can usefully be informed by considering the nature of the site, its likely users, the risk of hostile or abusive comment or of an influx of spam, potential commercial or political ramifications, and whether any particular categories of users may need protection (e.g., children).
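
For site owners who find it helpful to think in implementation terms, the three approaches can be expressed as a simple submission pipeline. The sketch below (in Python) is illustrative only; the ModerationMode enum and submit_comment function are hypothetical names, not part of any particular platform.

    from enum import Enum, auto


    class ModerationMode(Enum):
        NONE = auto()   # no moderation at all
        PRE = auto()    # vet contributions before they go live
        POST = auto()   # publish first, review afterwards (or spot-check)


    def submit_comment(comment, mode, moderation_queue, published):
        """Route a user contribution according to the chosen moderation mode."""
        if mode is ModerationMode.PRE:
            # Nothing goes live until a moderator approves it.
            moderation_queue.append(comment)
        else:
            # NONE and POST both publish immediately; POST also queues the
            # comment so it can be reviewed after the fact.
            published.append(comment)
            if mode is ModerationMode.POST:
                moderation_queue.append(comment)

The design point is simply that the choice of mode determines whether human review happens before publication, after it, or not at all.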

The potential paradox of no moderation

While the law in this area is currently uncertain, those making this decision may wish to note an argument that user-generated content that is not moderated prior to publication may expose the hosting agency to less risk of liability should objectionable comment appear, at least where the cause of action is defamation. That argument is subject to an important qualification: to avoid liability, the content must be removed as soon as possible once the agency is informed of its nature.

In very simple terms, this position is explicable on the basis that, without pre-publication moderation, the agency is not aware of the objectionable content and therefore may not, in the eyes of the law, be complicit in its publication. In technical terms, where the cause for complaint is the publication of defamatory comment, the so-called “innocent dissemination” defence may be available (see section 21 of the Defamation Act 1992, but note that there is no New Zealand case law on the availability of this defence in the internet context).

Similar arguments might, in certain circumstances, be available where the cause of action is breach of copyright, but it is important to note that New Zealand copyright law is currently particularly uncertain in its application to innocent internet service providers and others in a similar position (see, for example, MED’s Digital Technology and the Copyright Act 1994: A Discussion Paper (July 2001) and J Bayer’s “Liability of Internet Service Providers for Third Party Content” (2007) VUWLR Working Paper Series, Volume 1, available from Victoria University’s Faculty of Law). The position will be clarified, generally in ISPs’ favour, upon enactment of the Copyright (New Technologies and Performers’ Rights) Amendment Bill, which contains a specific set of provisions on ISP liability.

A tough decision?

Even if there is some temporary comfort to be had from not moderating in advance, that comfort needs to be weighed against the risks of not pre-vetting comment posted to the site: depending on the context, reputational and security issues may arise, not to mention the risk of being sued notwithstanding the possible availability of defences. These risks, coupled with the significant legal uncertainty in this area, may suggest that pre-publication moderation is preferable, at least for risk-averse organisations. The cost of that choice is the loss of spontaneous two-way interaction (which may simply send some users packing) and the human resources needed to undertake the moderation.

If pre-publication moderation is not the preferred approach, the site-owning organisation will want to ensure that the site carries comprehensive terms of use and that resources are in place to remove objectionable content as soon as possible after the organisation becomes aware of its nature.
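
In implementation terms, that readiness amounts to a notice-and-takedown process. The following is a minimal sketch under assumed names (a Complaint record, a handle_complaint function and a published dictionary keyed by content identifier, none of which come from the post itself); the timestamps matter because any innocent-dissemination argument depends on prompt removal once the organisation is on notice.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone


    @dataclass
    class Complaint:
        """A notice that published content may be objectionable."""
        content_id: str
        reason: str
        received_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc)
        )
        resolved_at: datetime | None = None


    def handle_complaint(complaint: Complaint, published: dict) -> Complaint:
        """Take the content down promptly and record when that happened."""
        published.pop(complaint.content_id, None)   # remove the offending item
        complaint.resolved_at = datetime.now(timezone.utc)
        return complaint

In practice, the record of when a complaint was received and when it was resolved would also help demonstrate, after the fact, how quickly the organisation acted.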

A note for the public sector

Public sector agencies may note that moderated content is potentially subject to requests under the Official Information Act 1982. The media, for example, might make a request seeking access to all online comment, including moderated comment that was allowed to go live and that which was not, with a view to comparing the two and opining on, for example, the organisation’s approach to freedom of expression. The practical implication should be obvious.

2 Comments

  1. One other possible system to weigh is a variation of after-the-fact moderation that allows users to moderate the content themselves. Look at sites such as Digg and Reddit: you’ll find that, though there is some comment moderation by the company itself, most is done by voting from users. It works well on sites with large numbers of both users and comments.

    There are risks and rewards with that system as well, but it is another approach to think about.

    Just a thought…

  2. Richard (Author)

    Thanks Jonathan. I entirely agree that this is another option. Come to think of it, user-driven moderation/voting could usefully supplement either pre-publication moderation by the site owner or post-publication moderation. In the former case, this supplement could help mop up objectionable content which, for whatever reason, the moderator lets slip through. Thanks again.
