
The EU approach to content regulation online: tackling (il)legal content online with upload and re-upload filters!


Intermediary liability has so far been a never-ending story. For the past two years, we have been regularly fed communications, codes of conduct, legislative proposals, as well as soft but loud encouragements from national governments, all targeting online platforms and asking them to do much more to tackle illegal content such as child sexual abuse material, hate speech, terrorist and extremist propaganda, copyright infringement…

Is the intermediary liability story finally coming to an end in Europe? And, crucially, is it a happy ending? On 28 September the European Commission finally released the last bit of its VISION: its much-awaited Communication on content regulation, officially entitled “Communication on Tackling Illegal Content Online – Towards an enhanced responsibility of online platforms.”

What is the Commission’s vision all about?

Online platforms are being asked to act more responsibly. This is not a novelty, says the Commission: that message was already conveyed in previous communications of 2016 and 2017. The readers are warned in bold: the Commission’s vision is compatible with the EU acquis (i.e. the accumulated legislation, legal acts and court decisions that constitute the body of EU law), in case there was any doubt!

But how can online platforms act more responsibly? The answer is through “EU-wide self-regulatory efforts to remove illegal content.” (p. 4).

For this to happen, though, could it be that the EU legal framework for content regulation needs to be clarified and streamlined? Guess what? This is exactly what the Communication is meant to do: the Commission’s ambition is indeed – through this non-binding document – to explain to online platforms how the E-Commerce Directive covering ‘information society services’, including ‘intermediary service providers’, and in particular its Articles 14 and 15, should be interpreted! [For more on the Directive and the terminology of platforms, see my previous posts here and here.]

Why did the Commission feel it had to step in now? Because, as the debate surrounding the proposal for a Directive on Copyright in the Digital Single Market has shown, there is no consensus on how the case law of the Court of Justice of the European Union (CJEU) on this topic should be interpreted, and divergences persist within and between Member States. In fact, the Commission is of the view that “a harmonised and coherent approach to removing illegal content does not exist at present in the EU” (p. 5). Yet a coherent approach is needed, even if the Commission acknowledges that the definition of illegal content is sector-specific and can be either EU-wide or national.

In a nutshell, the Communication comprises a set of GUIDELINES: “the objective … is to set out what online platforms, competent authorities and users should do in order to detect illegal content quickly and efficiently.” [I am wondering now: given the implications in terms of fundamental rights and the very intense debate taking place on these issues, should the Commission be the one guiding online platforms through this means?]

How does the Commission read the E-Commerce Directive? Its reading can be summed up in the following points:

  1. The Commission confirms that the E-Commerce Directive (per its Recital 40) is compatible with “the development of rapid and reliable procedures for removing and disabling access to illegal information.”
  2. The Commission reads Recital 42 of the E-Commerce Directive in the same way as the CJEU, which is not surprising, and repeats that online platforms can benefit from Article 14 (the ‘hosting’ defence to liability from third party content, which forms the basis for notice and take-down procedures by online hosts under EU law) only if they are passive providers, i.e. when their activities are “of a mere technical, automatic and passive nature, which implies that the information society service provider has neither knowledge of nor control over the information which is transmitted or stored.”
  3. Does the Commission clarify the notion of ‘general monitoring obligation’ (see my posts here and here on the importance of this legal concept and its interpretation)? Well, at first glance, the answer seems to be negative. The Commission starts by stating that while general monitoring obligations are prohibited, Recital 47 of the Directive makes it possible to impose monitoring obligations in a specific case.
  4. It is true that the Commission is concerned about over-removal of legal content, “which in turn impacts freedom of expression and media pluralism” (p. 6). The ‘solution’ it gives is that “adequate safeguards should therefore be foreseen, and adapted to the specific type of illegal content concerned.” What are these safeguards? Counter-notices and measures against ‘bad’ notifiers or flaggers, says the Commission (p. 17). [Really? Note that platforms are then apparently supposed to assess whether the grounds included in counter-notices are reasonable.]

So where is the added value of this Communication? Here it comes! The Commission adds:

  1. Online platforms should invest in legal expertise: they should have “the necessary resource to understand the legal frameworks in which they operate”. (p. 7). [Okay, but this is not exactly original!].
  2. Online platforms “should also cooperate closely with law enforcement and other competent authorities.” (p. 7) This idea clearly underlies Art. 15(2) of the E-Commerce Directive (see my post here), but Art. 15(2) would seem to require specific national implementation. [Moreover, if we have procedures in place to allow law enforcement authorities to access data held by platforms, shouldn’t we also have procedures in place to regulate “spontaneous” communications between platforms and law enforcement?]
  3. Online platforms are “encouraged to make use of existing networks of trusted flaggers” (p. 8). But who should be considered trusted flaggers? Those who offer “particular expertise in notifying the presence of potentially illegal content,” e.g. Europol’s Internet Referral Unit, the INHOPE network… What is more, the comment is made that “[n]otices from trusted flaggers should be able to be fast-tracked by the platform.” [What does fast-tracked mean? Automatic responses, i.e. automatic removals, upon receipt of notifications without further assessment by platforms? Could rightholders be trusted flaggers?]
  4. “Online platforms should put in place effective mechanisms to facilitate the submission of notices that are sufficiently precise and adequately substantiated to enable the platforms to take a swift and informed decision about the follow-up. This should facilitate the provision of notices that contain an explanation of the reasons why the notice provider considers the content illegal and a clear indication of the location of the potentially illegal content (e.g. the URL address).” (pp. 9-10). This is where things start to get more complicated. What is this implying? That online platforms should assess whether or not to act upon notifications, and that they are also responsible for developing ‘advanced’ procedures for this purpose? (A minimal notice structure is sketched just after this list.) [Can they ask for more than URLs? Could/Should they ask for a statement as to whether exceptions to copyright infringement have been assessed by notifiers? What about other types of infringements and possible justifications? Is this saying that in case of doubt platforms should not take down the notified content? Should we really leave the platforms to sort these procedures out by themselves? If yes, then platforms should really invest in legal expertise.]
  5. “Online platforms should, in light of their central role and capabilities and their associated responsibilities, adopt effective proactive measures to detect and remove illegal content online and not only limit themselves to reacting to notices which they receive.” (p. 10). [WOW! This is where it gets fascinating!] The Commission “considers that taking such voluntary, proactive measures does not automatically lead to the online platform losing the benefit of the liability exemption provided for in Article 14 of the E-commerce Directive.” (p. 10). Why? First, the Commission seems to suggest that a platform implementing proactive measures can still be passive. But what are these ‘proactive measures’ that can still leave a provider passive? [Isn’t there a better way to express the idea, by the way?] Well, measures aimed at “detect[ing] and remov[ing] illegal content online”. [What? So let’s say a platform systematically moderates content. Is the platform both proactive and passive? What if the platform systematically filters content? Is the platform both proactive and passive? What does ‘proactive measures’ really mean? Ex ante measures, i.e. measures taking place before the content is actually transmitted to recipients of the service? Or, more simply, measures taken at the initiative of the platform (pushed by the Commission)? ‘Proactive measures’ is such a crucial term for understanding the Commission’s guidelines that a definition is essential! Given that upload filters most likely amount to general monitoring, the Commission can’t be asking platforms to put in place upload filters – or could it?]
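For the technically minded reader, here is a minimal sketch of what a ‘sufficiently precise and adequately substantiated’ notice (point 4 above) could look like on the platform side. Every field name and threshold here is my own invention for illustration; the Communication prescribes nothing of the sort:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Notice:
    """Illustrative notice record; all field names are hypothetical."""
    url: str                           # "a clear indication of the location"
    legal_ground: str                  # e.g. "copyright", "hate speech"
    explanation: str                   # why the notifier deems the content illegal
    notifier: Optional[str] = None     # trusted flagger? rightholder? anonymous?
    exceptions_assessed: bool = False  # did the notifier consider e.g. quotation/parody?

def is_actionable(notice: Notice) -> bool:
    """A platform-side plausibility check: is the notice precise and substantiated
    enough for a 'swift and informed decision'? The criteria are invented."""
    return (notice.url.startswith("http")
            and bool(notice.legal_ground)
            and len(notice.explanation) > 0)
```

Even this toy version makes the difficulty visible: every accepted or rejected notice is the platform, not a court, assessing legality.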

Reading further, it becomes far less obvious that the Commission does not want to push for the adoption of upload filters. It expressly calls for the use of automatic detection and filtering technologies: “Online platforms should do their utmost to proactively detect, identify and remove illegal content online.” (p. 13). As explained by Daphne Keller here, such a call opens a dangerous path, strewn with pitfalls. The only way to make sense of the Communication would be to say that the Commission, although it uses the word ‘should’, does not see its Communication as a means of imposing general monitoring obligations on platforms. At the end of the day, Art. 15 of the E-Commerce Directive only prohibits general monitoring obligations, not proactive general monitoring, i.e. monitoring at the initiative of the platform (pushed by the Commission)!
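To see why the call for proactive detection sits so uneasily with Art. 15, consider a minimal sketch of what an upload filter structurally does. This is again my own illustration: real systems use perceptual fingerprinting rather than the exact hashing used here, but the structural point stands either way – every upload from every user is inspected before publication:

```python
import hashlib

# Hypothetical blocklist of fingerprints of known illegal files.
BLOCKLIST = {"<fingerprint of a known illegal file>"}

def fingerprint(content: bytes) -> str:
    # Stand-in for content recognition; real filters match perceptually,
    # not byte-for-byte, but the monitoring structure is identical.
    return hashlib.sha256(content).hexdigest()

def upload_filter(content: bytes) -> bool:
    """Runs on EVERY upload by EVERY user, ex ante, before publication.
    This indiscriminate inspection is precisely what looks like the
    'general monitoring' that Art. 15 prohibits imposing on providers."""
    return fingerprint(content) not in BLOCKLIST  # True = allow publication
```

Whether such monitoring is ‘imposed’ or merely ‘encouraged’ is exactly the distinction the Communication leans on.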

Not only is the scope of Article 14 not clarified by the Communication, it is made more obscure! Why? Because the Commission seems to suggest that, even if being proactive is a way to acquire knowledge, a platform that acquires knowledge can still benefit from Article 14, since that Article expressly exempts a hosting provider from liability when, upon obtaining knowledge, it reacts expeditiously. This is true, but the CJEU has also said (e.g. in Google v Vuitton) that to benefit from Article 14 in the first place the provider must be passive, meaning it must neither exercise control over the content nor have knowledge of it! So what is the Commission really saying? It may be that the real problem comes from the CJEU, as I explain here, rather than from the Commission; but what, then, is the value of issuing a set of recommendations of this type?

  6. Lastly, the Commission also pushes for ‘automatic re-upload filters’, known as ‘staydowns’. However, the Commission does not clarify whether a staydown should be triggered only by court orders or also by notifications. Haven’t some Member States rejected staydown procedures? The Commission seems to be convinced that automatic re-upload filters allow a contextual assessment of exceptions. [Isn’t that true only when the content is illegal in every context? Besides, what is the difference between upload filters and automatic re-upload filters in terms of their implications for monitoring? See the sketch below.] Interestingly, the Commission seems to suggest that, after all, what Article 13 of the proposed Copyright Directive – entitled “Use of protected content by information society service providers storing and giving access to large amounts of works and other subject-matter uploaded by their users” – does is simply to require staydown. [But do databases of hashes in the field of copyright necessarily imply a prior assessment of the lawfulness of content?]
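As a sketch of why hash-based staydowns cannot perform the contextual assessment that copyright exceptions require – once more my own illustration, not anything set out in the Communication – note that the filter sees only bytes, never context:

```python
import hashlib

removed_hashes = set()  # hashes of content previously taken down

def take_down(content: bytes) -> None:
    """On removal, remember the file's hash so it 'stays down'."""
    removed_hashes.add(hashlib.sha256(content).hexdigest())

def reupload_allowed(content: bytes, context: str) -> bool:
    """The 'context' argument is decorative: a hash match depends only on
    the bytes, so a lawful re-use (quotation, parody, news reporting) of a
    previously removed file is blocked all the same."""
    return hashlib.sha256(content).hexdigest() not in removed_hashes

clip = b"<30 seconds of a film>"
take_down(clip)                                               # removed after one notice
print(reupload_allowed(clip, "a review quoting the scene"))   # False: stays down
```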

In the end, what is the main message of the Communication?

Translated into non-legalese, the message could be: “Online platforms, please do as much as you can to tackle what seems to be illegal content, and in particular implement upload and automatic re-upload filters! Online platforms, do use content recognition technology at your own initiative as well as when you receive notifications! Don’t be scared of losing Article 14 immunity! But if you do, we are not responsible, because we have never obliged you to monitor content on your platforms! We have just expressed… a wish!”

But should all wishes be granted?

Sophie Stalla-Bourdillon

 
