The European Commission (EC) Recommendation of 1.3.2018 on measures to effectively tackle illegal content online (C(2018) 1177 final) is now freely accessible. As readers know, this is not the first time the EC has attempted to express meaningful thoughts (for lack of a better word, as once again the instrument is not strictly speaking binding on anyone, although the EC will monitor the effects of this Recommendation as per Chapter IV) on the topic. On 28 September 2017, for instance, the EC had released its Communication outlining guidance on measures to tackle illegal content online (see my earlier post on the Communication here).
So what? Don’t we already know everything that we need to know about Articles 12 to 15 of the E-commerce Directive (ECD)? Didn’t Andrus Ansip, the Vice-President of the European Commission, tweet a few days ago that “EU’s limited liability system in #eCommerce should remain backbone of an open, fair and neutral internet”? [By the way, his speech is available here.]
Well, the answer is NO! There is always something “new” to say about Internet intermediaries, it seems!
Notably, the Recommendation targets hosting providers more specifically than any other type of intermediary provider.
Three points in particular are worth highlighting:
- One of the main problems with this Recommendation is its over-reliance upon private ordering and private enforcement. Fast-track procedures for trusted flaggers, for example, should be put in place in order to process notices as per para. 25 (a sketch of what such a fast track could look like follows this point). However, trusted flaggers are only considered trusted because hosting service providers are of the view that they are trustworthy, as per the definition inserted at para. 4(g): “’trusted flagger’ means an individual or entity which is considered by a hosting service provider to have particular expertise and responsibilities for the purposes of tackling illegal content online.”
In the same vein, there seems to be an assumption that notice providers are generally trustworthy. Identification of the notice provider should not be a requirement of notice-and-action procedures (see Chapter II), although para. 21 demonstrates concerns about abusive behaviour. [By the way, does para. 21 mean that it is for the hosting provider to decide which measures to apply “to prevent the submission of, or the taking of action upon, notices or counter-notices that are submitted in bad faith and other forms of abusive behaviour”?]
Going further, Member States are encouraged to implement Article 15(2) of the ECD and thereby to impose information obligations upon hosting providers.
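Purely by way of illustration (the Recommendation prescribes no particular design, and the class names, the priority rule and the addresses below are all hypothetical), a fast track of this kind could amount to little more than a priority queue ordered against the provider’s own list of trusted flaggers:

```python
import heapq
import itertools
from dataclasses import dataclass, field
from typing import Optional

@dataclass(order=True)
class Notice:
    priority: int                 # 0 = trusted flagger (fast track), 1 = everyone else
    seq: int                      # tie-breaker preserving submission order
    content_url: str = field(compare=False)
    submitter: Optional[str] = field(compare=False, default=None)  # identification is optional

# Flaggers are "trusted" only because the hosting provider itself
# considers them so (cf. the definition in para. 4(g)).
TRUSTED_FLAGGERS = {"hotline@example.org"}

_counter = itertools.count()
queue = []

def submit_notice(content_url, submitter=None):
    # Anonymous notices are accepted (identification is not a requirement),
    # but notices from trusted flaggers jump the queue (cf. para. 25).
    priority = 0 if submitter in TRUSTED_FLAGGERS else 1
    heapq.heappush(queue, Notice(priority, next(_counter), content_url, submitter))

submit_notice("https://host.example/item/1")                         # anonymous notice
submit_notice("https://host.example/item/2", "hotline@example.org")  # trusted flagger
print(heapq.heappop(queue).content_url)  # item/2 is reviewed first
```

The point of the sketch is simply that the entire notion of “trust” lives in a set maintained by the provider itself, which is precisely the private-ordering concern raised above.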
- Despite the conflation of all types of illegal activities, and in particular of “certain information relating to terrorism, child sexual abuse, illegal hate speech or infringements of consumer protection laws” as well as copyright infringement, the Recommendation attempts to draw a distinction between two types of illegal activities: “terrorist content” and the rest.
The Recommendation supports the implementation of two types of measures: “notice-and-action mechanisms” and “proactive measures.” No definition of proactive measures is given, although notices are defined at para. 4.
Recital 24, however, gives a hint as to what proactive measures could be:
“In addition to notice-and-action mechanisms, proportionate and specific proactive measures taken voluntarily by hosting service providers, including by using automated means in certain cases, can also be an important element in tackling illegal content online, without prejudice to Article 15(1) of Directive 2000/31/EC. In connection to such proactive measures, account should be taken of the situation of hosting service providers which, because of their size or the scale on which they operate, have only limited resources and expertise and of the need for effective and appropriate safeguards accompanying such measures.”
The EC therefore seems to suggest that not all proactive measures will necessarily be suitable for all hosting providers and all types of content. The juxtaposition of para. 18 and paras. 36-37 points in the same direction.
Para. 18 states that:
“Hosting service providers should be encouraged to take, where appropriate, proportionate and specific proactive measures in respect of illegal content. Such proactive measures could involve the use of automated means for the detection of illegal content only where appropriate and proportionate and subject to effective and appropriate safeguards, in particular the safeguards referred to in points 19 and 20.”
Paras. 36-37, which specifically target terrorist content, go further (for what the second of these measures could mean in practice, see the sketch after the quoted text):
- Hosting service providers should take proportionate and specific proactive measures, including by using automated means, in order to detect, identify and expeditiously remove or disable access to terrorist content.
- Hosting service providers should take proportionate and specific proactive measures, including by using automated means, in order to immediately prevent content providers from re-submitting content which has already been removed or to which access has already been disabled because it is considered to be terrorist content.
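To see what the second of these measures amounts to in practice: a “stay-down” mechanism is typically built by fingerprinting removed content and screening every new upload against the resulting blocklist. Here is a minimal, purely illustrative Python sketch (the class and method names are hypothetical, and real systems would use perceptual rather than exact hashing so that slightly altered copies are also caught):

```python
import hashlib

class StayDownFilter:
    """Blocks re-submission of content already removed as illegal."""

    def __init__(self):
        self._removed_fingerprints = set()

    def register_removal(self, content: bytes) -> None:
        # Fingerprint content that has been removed or to which
        # access has been disabled.
        self._removed_fingerprints.add(hashlib.sha256(content).hexdigest())

    def allows_upload(self, content: bytes) -> bool:
        # An upload is rejected if its fingerprint matches previously
        # removed content; this is the "stay-down" step.
        return hashlib.sha256(content).hexdigest() not in self._removed_fingerprints

# Usage: once an item is taken down, a byte-identical re-upload is blocked.
f = StayDownFilter()
f.register_removal(b"content removed as terrorist content")
assert not f.allows_upload(b"content removed as terrorist content")
assert f.allows_upload(b"unrelated lawful content")
```

Note that such a filter necessarily inspects every single upload, a point worth keeping in mind when reading the questions below.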
Could this mean that, outside the terrorist context, stay-down obligations are excluded as a matter of principle (i.e. including for copyright infringement)?
Could this also mean that, as a matter of principle (i.e. including for copyright infringement), automated means should only be used for detection purposes and should not be followed by automatic removal?
This was not the position of the EC in its September Communication, was it? [At the same time, one cannot really say that the EC expressed a clear position in its Communication].
Recital 25 of the Recommendation acknowledges the importance of context, and states that: “It can, in particular, be appropriate to take such proactive measures where the illegal character of the content has already been established or where the type of content is such that contextualisation is not essential.”
However [and without wanting to ruin the party], the fact that the EC seems to assume that context is irrelevant for terrorist content somewhat undermines the distinction it seems to be drawing between terrorist content and other types of illegal content. This is exacerbated by the use of the expression “terrorist content” [which is misleading].
Still, in the context of copyright infringement, recommending that proactive measures be used for detection purposes only… isn’t it already something? Could it mean that Article 13 of the proposal for a Directive on copyright in the Digital Single Market is not a workable option? Is it really the case that the Recommendation leaves the proposal for a Directive on copyright in the Digital Single Market unaffected, as per Recital 7? Or is para. 18 of the Recommendation simply clarifying what Article 13 was intended to achieve, i.e. no automatic removal? [This is what one could find in the first version of Article 13(1): “Information society service providers that store and provide to the public access to large amounts of works or other subject-matter uploaded by their users shall, in cooperation with rightholders, take measures to ensure the functioning of agreements concluded with rightholders for the use of their works or other subject-matter or to prevent the availability on their services of works or other subject-matter identified by rightholders through the cooperation with the service providers. Those measures, such as the use of effective content recognition technologies, shall be appropriate and proportionate.”]
Recital 27 of the Recommendation makes the point that “In addition to any safeguards laid down in the applicable law, for instance regarding the protection of personal data, particular safeguards, notably human oversight and verifications, should be provided for and applied where appropriate in relation to the use of automated means, so as to avoid any unintended and erroneous decisions.”
But, if both automated means and human oversight should be present, doesn’t it mean that only a very [very] limited number of hosting providers would be able to cope with the costs of such measures and therefore only a very [very] limited number of hosting providers should be encouraged to put in place proactive measures?
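For the sake of argument, here is what the combination could look like: the automated step only detects and queues, while the removal decision stays with a human. This is a minimal sketch under stated assumptions (the classifier is a stand-in for whatever content recognition technology a provider might deploy, and all names are hypothetical):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DetectionPipeline:
    """Automated detection feeding a human review queue; no automatic removal."""
    # Hypothetical detector returning a score in [0, 1]; any real
    # content-recognition technology could be plugged in here.
    classifier: Callable[[str], float]
    threshold: float = 0.8
    pending_human_review: List[str] = field(default_factory=list)

    def ingest(self, content: str) -> None:
        # The detection step is automated...
        if self.classifier(content) >= self.threshold:
            # ...but removal is deferred to a human reviewer, in line with
            # the "human oversight and verifications" safeguard of Recital 27.
            self.pending_human_review.append(content)

# Toy keyword detector standing in for a real model.
pipeline = DetectionPipeline(classifier=lambda text: 1.0 if "illegal" in text else 0.0)
pipeline.ingest("obviously illegal material")
pipeline.ingest("harmless holiday photos")
print(pipeline.pending_human_review)  # only the flagged item awaits a human decision
```

The human reviewers emptying that queue are, of course, precisely the cost factor that smaller providers may not be able to bear.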
It remains, however, that systematic automatic detection is still a form of systematic monitoring.
- While on the whole the EC is pushing for proactive monitoring, it is attempting to do so in a way that is compliant with the General Data Protection Regulation (GDPR), i.e. a piece of secondary legislation which expresses the positive obligation of Member States to respect the fundamental rights to privacy and data protection.
This explains why the GDPR is mentioned several times in the text of the Recommendation. The omnipresence of the GDPR suggests that arguing that content monitoring does not involve the processing of personal data would be a difficult battle to fight.
With this said, there is another piece of legislation that is worth considering: the proposal for an ePrivacy Regulation, currently making its way through the legislative process, which should constrain the monitoring activities of some intermediary providers, e.g. providers of messaging services and web-based e-mail services. Recital 1 of the proposed Regulation states that “The principle of confidentiality should apply to current and future means of communication, including calls, internet access, instant messaging applications, e-mail, internet phone calls and personal messaging provided through social media.” Nevertheless, providers of electronic communications networks and services would be permitted to process electronic communications data in several instances. In respect of content data, three main justifications (or legal bases) are available: the processing of the data is necessary to achieve the transmission of the communication, it is necessary to ensure network security, or consent is obtained (see Article 6). Consent would also be required to store content data (see Article 7). The only way to restrict Articles 6 and 7 would be through a legislative measure (as per Article 11). [Note that the Court of Justice of the European Union held in Tele2 Sverige that the principle of confidentiality already enshrined in Article 5 of the ePrivacy Directive, which the ePrivacy Regulation intends to replace, “applies to the measures taken by all persons other than users, whether private persons or bodies or State bodies.” See here for a comment on the decision.] Has the EC thought about how the principle of confidentiality could/should affect intermediary monitoring practices?
What does the EC really seek to achieve with this Recommendation? Why issue another instrument that [could] raise[s] compatibility issues with existing proposals?
Sophie Stalla-Bourdillon