
Data Protection & Intermediary liability: how do the French do it?


While the scope of intermediary liability exemptions is being discussed in several places around Europe (and beyond), it is interesting to go back to the Overblog legal saga, which a few years ago was described by some as foreshadowing other popular sagas, such as the infamous Google Spain case (discussed in a previous post here). Overblog is a blogging platform, which was “coded with love in Toulouse” a few years ago. Overblog therefore hosts blogs created by its users, and offers both free and premium plans. How does Overblog make money? Through advertising and premium plans. Moreover, with its “Revenue Share” option Overblog offers users the possibility to share advertising revenues as well. Obviously – on top of hosting blogs – Overblog provides search functionalities based on topics, popularity and newsworthiness. [This is worth recalling as nowadays many may wonder what platforms and hosting providers actually are…]

Now for the facts of the ‘saga’: Mr X was one of Overblog’s users and contributed to different forums under the pseudonym “N…00”. In May 2010, some other users posted information relating to his private life, some of which allegedly amounted to defamation or insult. Mr X asked the platform to remove the contentious information, but without success. Mr X then turned to the president of the Béziers First Instance Tribunal (TGI), in interim proceedings, seeking both the removal of the information and compensation for the damage suffered. The TGI dismissed his claim. Mr X appealed. Eight months later, on 15 December 2011, the Montpellier Court of Appeal (CA) admitted the appeal and ordered the operator of the platform to remove Mr X’s names. As regards damages, a judgment on the merits was needed. This is why a second judgment was delivered by the TGI a few years later, in 2014, dismissing the claim for compensation. Mr X did not give up the battle and appealed the 2014 judgment. Three years later, the CA held Overblog liable on the basis of Article 9 of the French Civil Code (concerning the protection of private life) and the French data protection legislation (Loi Informatique et Libertés).

What does this decision tell us about the interplay between data protection law and the liability exemptions deriving from the E-commerce Directive, transposed into French law by the Digital Economy Law (Loi n° 2004-575 of 21 June 2004)?

As hinted above, while Mr X was using a pseudonym, some other users had revealed his first name, surname, town and email address, and had associated this information with other pieces of information relating to his private life (marital relationship, child custody, an allegation of paedophilia). Besides, when entering his name in the search engine [which search engine? This is not specified in the 2017 decision, but in the 2011 decision one finds the answer: Google, of course!], the results referred to several Overblog web pages. Overblog is described by the CA as “the editor of a site or internet platform, which provides Internet users disk space as well as software tools allowing them to create their own blogs”.

The CA notes that the contentious information had remained accessible to Internet users through the platform for 18 months. More precisely, by maintaining the contentious information for 18 months, while Mr X did not have the possibility to delete it himself, the operator of the platform had committed an act of negligence causing moral (non-material) damage (evaluated at 7,500 euros) to the appellant.

What is happening here? To fully understand the motives of the 2017 decision, it is helpful to read the CA’s 2011 decision as well. The reasoning of the CA can be broken down into three points:

  1. The platform – or rather the operator of the platform – is the data controller of the information uploaded by its users. Why? Because the operator collects, stores and organises user-generated information, while retaining the right, on the basis of its terms of use, to suspend the transmission or diffusion of information by its users in cases of abuse. [What does this characterisation show? Does it mean that the act of the operator of the platform is in fact distinct from the act of the uploader, its user (for a similar discussion see my previous post)? What is the act that caused the damage? Most likely, the unlawful act of the user, not that of the operator of the platform – in other words, the act of uploading information relating to the private life of another on a blog that is accessible to a community of users. This is why the liability exemptions deriving from the E-commerce Directive are maybe not essential in a case of this type.]
  2. That said, the CA finds that the E-commerce Directive exemptions, and in particular Art. 14 on ‘hosting’, are applicable in a case of this type, as the operator of the platform is a hosting provider. But no further explanation is provided. This might suggest that the CA does not really appreciate the variety of activities at stake, even though it ultimately opts for a sensible and pragmatic solution.
  3. However, because Mr X had first tried to obtain the removal of the contentious information from the operator of the platform, the latter had been made aware of the facts and circumstances from which the illegal activity or information was apparent. Yet the operator of the platform had refused to remove the information for 18 months. Liability thus had to follow.

Having read the CA’s decision, it is interesting to compare it with the first decision (“ordonnance”) of the TGI. The TGI had noted two important elements in its 2011 decision.

  1. Mr X had never tried to contact the “editor of the blog” (to use the TGI’s terms).
  2. The TGI had referred to decision n° 2004-496 of 10 June 2004 of the French Constitutional Council (Conseil constitutionnel) to hold that, even in cases in which a hosting provider is informed of an unlawful activity by means of a notification, it should only remove the information if the activity is manifestly unlawful, i.e. if it amounts to endorsement of crimes against humanity, incitement to racial hatred, child pornography, incitement to violence or violations of human dignity.

The conclusion of the TGI was thus that Mr X’s claim had to be rejected.

Who took the best approach?

Note that the approach of the TGI is particularly fascinating when compared, this time, with the recent decision of the European Court of Human Rights (ECtHR) of 7 February 2017 in Rolf Anders Daniel Pihl v. Sweden (an admissibility decision).

In a nutshell, this last case (the third of the trilogy starting with Delfi – see my previous case commentaries here and here) concerned a blog post accusing the applicant of being involved in a Nazi party, followed by an ‘anonymous’ user comment expressing the view that “that guy pihl is also a real hash-junkie according to several people I have spoken to.” To be more precise, “[t]he blog on which the post appeared was a small one run by a non-profit association. Although the blog allowed comments to be posted, it was clearly stated that such comments were not checked before publication and that commentators were responsible for their own statements. Commentators were therefore requested to “display good manners and obey the law”.” [Not to be misleading, a blogger, or the person in charge of running a blog, is not necessarily the same ‘thing’ or actor as the operator of a blogging platform.]

The applicant had submitted “that the post and the comment constituted defamation and that the association was responsible for both – for the latter because the association had failed to remove it immediately.”

While the ECtHR (Third Section) ultimately rejects the application, it does seem to reaffirm its view that, where hate speech or incitement to violence is at stake in a comment section comprising user-generated content, it is acceptable to hold the person in charge of running a blog strictly liable for the comments posted by its users, in particular when that person is not a non-profit entity.

Paragraph 39 of Pihl reads as follows:

“In view of the above, and especially the fact that the comment, although offensive, did not amount to hate speech or incitement to violence and was posted on a small blog run by a non-profit association which took it down the day after the applicant’s request and nine days after it had been posted, the Court finds that the domestic courts acted within their margin of appreciation and struck a fair balance between the applicant’s rights under Article 8 and the association’s opposing right to freedom of expression under Article 10.”

What do readers think? Who got it right… or at least better than the others?

Sophie Stalla-Bourdillon
