Among the policy objectives to pursue in order to ensure the flourishing of the Digital Single Market, one finds here: “Updating EU audiovisual rules and working with platforms to create a fairer environment for everyone, promote European films, protect children and tackle hate speech.” Clicking on ‘platforms,’ one can also read: “Online platforms are an important part of a thriving digital economy. The European Commission with its proposals wants to ensure that platforms operate properly and illegal content is not online.” The fundamentals of the European Commission’s strategy in relation to online platforms were expressed in the Communication on Online Platforms, published on 25 May 2016. In its mid-term review of the Digital Single Market Strategy on 10 May 2017, the Commission listed two action points:
- “by the end of 2017 to prepare actions to address the issues of unfair contractual clauses and trading practices identified in platform-to-business relationships, including by exploring dispute resolution, fair practices criteria and transparency;
- to ensure better coordination of platform dialogues within the Digital Single Market focusing on the mechanisms and technical solutions for removal of illegal content, with a view to enhancing their effectiveness in full respect of fundamental rights.”
On 28 September 2017, the Commission released its Communication outlining guidance on measures to tackle illegal content online, addressing point 2. [They said, they did!].
As I have discussed here, despite the statement of the Commission, the Communication on tackling illegal content raises a series of concerns for the protection of fundamental rights. This is partly due to the fact that the Commission thought it could solve the ‘immense’ challenge of content regulation online by means of a soft law instrument intended to interpret a Directive (i.e. the E-commerce Directive), which is an instrument of partial harmonisation and which is binding upon Member States only as to the results to be achieved, leaving national authorities to decide upon the methods.
These fundamental rights concerns grow considerably when the guidance on measures to tackle illegal content online is combined with another policy objective, already mentioned in the Communication of May 2016 but not always expressly linked with the strategy for removing illegal content online: to “encourage online platforms to recognise other eID means — in particular those notified under the eIDAS Regulation (EU) 910/2014.” Yet, this is what one can find in the Draft principles and guidance on eID interoperability for online platforms released on 5 October.
At first glance, the eIDAS Regulation of 2014 seems a great thing to have. It is intended to make national electronic identification (eID) schemes talk to each other (i.e. be interoperable) in order to reach a higher degree of market integration. Nevertheless, the privacy implications of the eIDAS interoperability framework are not easy to pin down. One needs to go beyond Article 12 of the eIDAS Regulation, which provides that the interoperability framework “should facilitate the implementation of the principle of privacy by design,” and examine a series of implementing acts.
To sum up what is happening at the European level now:
- The European Commission on 28 September 2017 asked online platforms:
- to “adopt effective proactive measures to detect and remove illegal content online and not only limit themselves to reacting to notices which they receive.” The use of automatic detection and filtering technologies is expressly encouraged for this purpose.
- to “report to law enforcement authorities whenever they are made aware of or encounter evidence of criminal or other offences in order to alert and enable the relevant authorities to investigate and prosecute individuals generating such content or the abuse of the services by organised criminal or terrorist groups.”
- On 5 October, the European Commission asked (or is proposing to ask, as the document is still a draft) online platforms to accept the use of electronic identities (or more precisely electronic identification means) generated by national eID schemes as a way for Internet users to connect to their systems. What does this mean? To translate into non-legalese: the European Commission is proposing to ask online platforms to ‘enable’ their users to connect to their systems with their real-world identities [even if in digital form].
How is this happening? In two steps:
- A design choice for the EU interoperability framework that has been made mandatory through regulation, i.e. Commission Implementing Regulation (EU) 2015/1501 of 8 September 2015 on the interoperability framework pursuant to Article 12(8) of Regulation (EU) No 910/2014 (see Article 11 and the Annex): eID means, when crossing borders, shall contain a minimum dataset. And what do you find in this “minimum” dataset? Names, date of birth and persistent unique identifiers (a schematic sketch of this dataset follows these two steps). There is not (yet) the possibility (or the requirement) for the interoperability framework to include pseudonyms [unless one reads Article 5 of eIDAS as applicable to electronic identification as well, but then those who were involved in the ‘birth’ of the eIDAS Regulation would tell you that pseudonyms are only for trust services] or to allow selective disclosure, as explained here. [It is thus puzzling to read at p. 6 of the guidance on eID interoperability that “users should be able to preserve a level of privacy and anonymity, e.g. by using a pseudonym…” A similar comment has already been inserted below the blog post of the Commission].
- An extension (pushed through soft law, i.e. the principles and guidance on eID interoperability for online platforms if adopted) of the use of eID means in order to ‘enable’ users to access services provided by online platforms (whereas the eIDAS Regulation only covers eID means used to access a service provided by a public sector body online).
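To make the first step concrete, here is a rough sketch in Python of what the mandatory part of that minimum dataset looks like once modelled as a data structure. The field names and the example values are illustrative assumptions of mine, not an official schema; only the list of mandatory attributes for a natural person (current family name(s), current first name(s), date of birth and a persistent unique identifier) is drawn from the Annex to Implementing Regulation (EU) 2015/1501.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class NaturalPersonMinimumDataset:
    """Illustrative model of the mandatory attributes which, under the Annex to
    Implementing Regulation (EU) 2015/1501, an eID means must carry when used
    across borders. Field names are assumptions, not an official schema."""
    persistent_unique_identifier: str  # unique identifier, as persistent as possible in time
    current_family_names: str
    current_first_names: str
    date_of_birth: date
    # The Annex also lists optional attributes (e.g. place of birth, current
    # address, gender). Nothing in the mandatory or optional set is a pseudonym,
    # and there is no mechanism for disclosing only a subset of these attributes.


# What a platform relying on such an eID assertion would receive, at a minimum:
example = NaturalPersonMinimumDataset(
    persistent_unique_identifier="ES/UK/02635542Y",  # identifier format is illustrative
    current_family_names="Garcia",
    current_first_names="Javier",
    date_of_birth=date(1964, 12, 31),
)
print(example)
```

In other words, once a platform accepts a cross-border eID means as a login credential, the smallest possible payload it can receive is already a fully identifying, persistent record.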
What is more, the eIDAS Regulation does not require one specific assurance level for national eID schemes, although it does acknowledge three different levels (low, substantial and high), and mutual recognition only applies for the two upper levels (a schematic illustration of this rule follows the two definitions below). By way of example:
- “assurance level low shall refer to an electronic identification means in the context of an electronic identification scheme, which provides a limited degree of confidence in the claimed or asserted identity of a person, and is characterised with reference to technical specifications, standards and procedures related thereto, including technical controls, the purpose of which is to decrease the risk of misuse or alteration of the identity”
- “assurance level substantial shall refer to an electronic identification means in the context of an electronic identification scheme, which provides a substantial degree of confidence in the claimed or asserted identity of a person, and is characterised with reference to technical specifications, standards and procedures related thereto, including technical controls, the purpose of which is to decrease substantially the risk of misuse or alteration of the identity.”
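Because mutual recognition only bites for the two upper levels, the consequence for cross-border use can be reduced to a very small piece of logic. The sketch below is my own shorthand of that rule, not part of any official eIDAS implementation:

```python
from enum import Enum


class AssuranceLevel(Enum):
    """The three assurance levels acknowledged by Article 8 of the eIDAS Regulation."""
    LOW = "low"                  # limited degree of confidence in the claimed identity
    SUBSTANTIAL = "substantial"  # substantial degree of confidence
    HIGH = "high"                # higher degree of confidence than 'substantial'


def subject_to_mutual_recognition(level: AssuranceLevel) -> bool:
    """Mutual recognition of notified eID schemes only applies to the two upper
    levels; a scheme that only reaches 'low' need not be recognised by other
    Member States."""
    return level in (AssuranceLevel.SUBSTANTIAL, AssuranceLevel.HIGH)


assert not subject_to_mutual_recognition(AssuranceLevel.LOW)
assert subject_to_mutual_recognition(AssuranceLevel.SUBSTANTIAL)
assert subject_to_mutual_recognition(AssuranceLevel.HIGH)
```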
What if a national eID scheme only specifies a low or substantial assurance level (see e.g. GOV.UK Verify)? What could it mean for users if online platforms systematically report to law enforcement authorities, in particular in an age of immediacy? And what could it mean for those who do not want to use eID means to connect to online platforms?
Is the European Commission about to “dramatically” change the fundamentals of online communications?
Note that it is possible to comment upon the draft principles and guidance on eID interoperability for online platforms rather ‘informally,’ either by sending a message to CNECT-EGOVERNMENT-AND-TRUST@ec.europa.eu or by posting a comment here BEFORE 10 NOVEMBER.
Sophie Stalla-Bourdillon