European online safety laws pose implementation challenges for online platforms
Proposals to combat online harms are taking shape across Europe, and in 2022 online platforms should make sure they are in line with emerging regulations
The potential for harmful content to be spread via online user content platforms is not new. Nor is the general desire to stamp it out, whether on the part of policymakers, the general public or the platforms themselves.
For the platforms, being a vehicle for the spread of objectionable content, such as terrorist content, child exploitation material, self-harm encouragement or fake news, has never been a desirable look, and certainly not a way to attract more users.
Recently, the media has increasingly focused on the need for platforms to prioritise user safety over other commercial concerns. Doing so, however, is an incredibly complex task: vast amounts of content are at play, much of it falling within a grey area where the potential for harm varies from user to user and must be weighed against freedom of expression considerations. In many cases, user content platforms have welcomed legislative proposals to establish industry-wide parameters on the kind of measures needed to combat online harms. In Europe, those proposals are starting to take shape.
Frameworks of responsibility
The common intention among regulators is to impose a more formalised framework of responsibility on platform providers without, however, calling into question the "safe harbour" principle that shields them from liability for user-generated material of which they are unaware. In Germany, such a framework is already in place. Its Network Enforcement Act (Netzwerkdurchsetzungsgesetz or NetzDG) and youth protection laws have led the charge in shifting the focus from the original content creator to the online platform through which the content is shared. These laws oblige platforms within scope to build youth protection mechanisms, such as "age gating", moderation systems and a means for users to flag non-compliant content.
On an EU scale, late 2020 brought the publication of the draft Digital Services Act (DSA). This proposed EU regulation focuses largely on illegal content, with requirements to implement prescribed take-down and complaints-handling mechanisms, to provide clear transparency information about content moderation, and to work with law enforcement authorities, including by reporting any suspected criminal offences involving a threat to the life or safety of others. The DSA was initially expected to be finalised by mid-2022, but is currently still working its way through the EU's legislative process.
France has not waited for the DSA to be adopted: since August 2021 it has had in place a regime closely modelled on the draft regulation, imposing the same type of due diligence obligations on platforms to combat the spread of online harms, an approach that has drawn criticism from the European Commission.
Meanwhile, in the UK, the Online Safety Bill is undergoing pre-legislative scrutiny following the initial draft released in May 2021. While this draft legislation also requires measures to be implemented to deal with illegal content, it goes somewhat further than the DSA by bringing into scope content that may not be illegal per se, but is of such a nature that there is a material risk it could be harmful to children (if the service can be accessed by children) or even to adults (for the largest platforms).
Assessing the risk of harm
These laws and proposals look to encourage implementation of effective mechanisms and structures by online platforms, rather than imposing liability based on a user's access to individual pieces of illegal or harmful content. In this respect, platforms will need to be able to demonstrate that the measures they have implemented are generally effective in light of the identified risks. Nevertheless, to reach a position where platforms can meaningfully prevent widespread access to illegal or harmful material, significant thought must inevitably be given as to where to draw the line, on a relatively granular basis, in terms of when specific types of content should be taken down.
In the UK this is particularly challenging, given the extremely wide scope of what may be harmful to a given user. Establishing what a person of "ordinary sensibilities" under the Online Safety Bill looks like, and what constitutes a "significant adverse physical or psychological impact" on them, will keep content moderation teams on their toes for a long while. Even with illegal content – applicable to both the UK and EU proposals and which, on the face of it, may seem easier to identify – there remains a challenge in establishing the extent to which platforms must proactively prevent access.
For instance, if a platform receives a complaint about a user posting objectionable content and has reasonable grounds to believe that the user could potentially post illegal content in the future, does the platform need to take action to remove that user now? Or should it wait for concrete items of illegal content to be posted, by which time the harm might already have been caused? Furthermore, platforms need to be cognisant of "over-takedown", particularly by artificial intelligence-based systems that are not able to assess context in the same way that a human might. For example, content that is clearly satirical or parodic in nature could be picked up in an automated keyword search using scanning technology that cannot distinguish it from genuinely hateful content.
The 2022 challenge
Online platforms will undoubtedly face numerous tricky issues when trying to implement appropriate structures to comply with these online safety proposals, while taking care not to be accused of infringing freedom of expression. They will be looking to regulatory codes of conduct and guidance to help translate the legal requirements into practical examples, particularly for borderline content. Nevertheless, the overarching expectation of lawmakers, that platforms must look critically at their content moderation practices and potential online safety pitfalls, will not change; platforms would be wise to start doing so, in light of the current drafts, as soon as possible.