
Media


European online safety laws pose implementation challenges for online platforms

Proposals to combat online harms are taking shape across Europe, and in 2022 online platforms should make sure that they are in line with the emerging regulations

The potential for harmful content to be spread via online user content platforms is not new. Nor is the general desire to stamp it out, whether on the part of policymakers, the general public or the platforms themselves.

For the platforms, being a vehicle for the spread of objectionable content, such as terrorist posts, child exploitation material, self-harm encouragement or fake news, has never been a good look, and certainly not a way to attract more users.

Recently, there has been more focus in the media on platforms needing to prioritise user safety over other commercial concerns. Moderating content at this scale, however, is an incredibly complex task: vast amounts of content are at play, much of it falling within a grey area where the potential for harm may vary from user to user and needs to be weighed against freedom of expression considerations. In many cases, user content platforms have welcomed legislative proposals that would establish industry-wide parameters on the kind of measures needed to combat online harms. In Europe, these proposals are starting to take shape.

Resources

> Osborne Clarke - Online Safety Bill: Parliament and DCMS proceed with separate scrutiny and inquiry into the draft online safety law
> Osborne Clarke - OSB in focus: will the Online Safety Bill increase the scope for actions brought by individuals against online platforms?
> Osborne Clarke - OSB in focus: is the Online Safety Bill a skeleton or a springboard for dynamic regulation?
> Osborne Clarke - OSB in focus: what categories of content and communications are within the Online Safety Bill's scope?
> Osborne Clarke - OSB in focus: what types of service will be caught by the UK Online Safety Bill?

Frameworks of responsibility

The common intention among regulators is to impose a more formalised framework of responsibility on platform providers without, however, calling into question the "safe harbour" principle that shields them from liability for user-generated material of which they have no knowledge. In Germany, such a framework is already in place. Its Network Enforcement Act (Netzwerkdurchsetzungsgesetz or NetzDG) and youth protection laws have led the charge in shifting the focus from the original content creator to the online platform through which the content is shared. These laws oblige platforms within scope to build youth protection mechanisms, such as "age gating", moderation systems and a means to flag non-compliant content.
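To make those obligations concrete, here is a minimal sketch of what an age gate and a user flagging mechanism might look like in a platform backend. It is illustrative only: the names (ContentItem, passes_age_gate, flag_content) and the age threshold are assumptions, not requirements drawn from the NetzDG or the youth protection laws.

```python
# Illustrative sketch only: names and the age threshold are assumptions,
# not requirements taken from the NetzDG or German youth protection law.
from dataclasses import dataclass, field
from datetime import date

MINIMUM_AGE = 18  # jurisdiction- and content-specific in practice

@dataclass
class ContentItem:
    item_id: str
    age_restricted: bool = False
    flags: list[str] = field(default_factory=list)  # user complaints awaiting review

def passes_age_gate(birth_date: date, today: date) -> bool:
    """True if the user has reached the minimum age for restricted content."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age >= MINIMUM_AGE

def flag_content(item: ContentItem, reason: str) -> None:
    """Record a user flag so that moderators can review the item."""
    item.flags.append(reason)

# Usage: gate a restricted item, then record a complaint against it.
item = ContentItem(item_id="abc123", age_restricted=True)
if item.age_restricted and not passes_age_gate(date(2010, 5, 1), date.today()):
    print("Access denied by age gate")
flag_content(item, "suspected non-compliant content")
```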

On an EU scale, late 2020 brought the publication of the draft Digital Services Act (DSA). This proposed EU regulation places a large focus on illegal content, including requirements to implement prescribed take-down and complaints-handling mechanisms, to provide clear transparency information about content moderation and to work with law enforcement authorities, including by reporting to them any suspected criminal offences involving a threat to the life or safety of others. The DSA was initially expected to be finalised by mid-2022, but is currently still working its way through the EU's legislative process.
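As a rough illustration of the kind of notice-and-action and complaints-handling workflow the draft DSA contemplates, the sketch below models a notice's lifecycle and the escalation duty for threats to life or safety. The statuses and function names are hypothetical; the regulation prescribes duties and outcomes, not this design.

```python
# Hypothetical notice-and-action workflow; the states and helpers are
# illustrative assumptions, not prescribed by the draft DSA itself.
from enum import Enum, auto

class NoticeStatus(Enum):
    RECEIVED = auto()
    REMOVED = auto()    # content taken down following a justified notice
    REJECTED = auto()   # notice unfounded; content stays up
    APPEALED = auto()   # uploader contests via the complaints mechanism

def report_to_law_enforcement(notice_id: str) -> None:
    """Stand-in for the DSA-style duty to report serious suspected offences."""
    print(f"Notice {notice_id}: escalated to the competent authority")

def handle_notice(notice_id: str, is_illegal: bool, threat_to_life: bool) -> NoticeStatus:
    """Decide the outcome of a single notice and escalate where required."""
    if threat_to_life:
        report_to_law_enforcement(notice_id)
    return NoticeStatus.REMOVED if is_illegal else NoticeStatus.REJECTED

# Usage: a justified notice leads to removal; a rejected one can still be appealed.
print(handle_notice("n-001", is_illegal=True, threat_to_life=False))
```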

France, despite criticism from the European Commission, has had a regime in place since August 2021 that closely mirrors the draft DSA pending its adoption, imposing the same type of due diligence obligations on platforms to fight the spread of online harms.

Meanwhile, in the UK, the Online Safety Bill is undergoing pre-legislative scrutiny following the initial draft released in May 2021. While this draft legislation also requires measures to deal with illegal content, it goes somewhat further than the DSA by bringing into scope content that may not be illegal per se, but is of such a nature that there is a material risk of it being harmful to children (if the service can be accessed by children) or even to adults (for the largest platforms).

Assessing the risk of harm

These laws and proposals look to encourage implementation of effective mechanisms and structures by online platforms, rather than imposing liability based on a user's access to individual pieces of illegal or harmful content. In this respect, platforms will need to be able to demonstrate that the measures they have implemented are generally effective in light of the identified risks. Nevertheless, to reach a position where platforms can meaningfully prevent widespread access to illegal or harmful material, significant thought must inevitably be given as to where to draw the line, on a relatively granular basis, in terms of when specific types of content should be taken down.
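One way to make that risk-based reasoning demonstrable is to keep a structured register that maps identified content risks to the measures addressing them, so the effectiveness of those measures can be reviewed against the highest-scoring risks first. The sketch below is an assumed design; none of the laws discussed prescribe this format.

```python
# An assumed risk-register design, shown only to link risks to measures;
# the categories, scales and fields are illustrative.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    category: str            # e.g. "terrorist content", "self-harm encouragement"
    likelihood: int          # 1 (rare) .. 5 (pervasive on the service)
    severity: int            # 1 (minor) .. 5 (severe harm)
    mitigations: tuple[str, ...]

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

register = [
    RiskEntry("terrorist content", 2, 5, ("hash matching", "expert review")),
    RiskEntry("self-harm encouragement", 3, 4, ("classifier", "support signposting")),
]

# Audit the measures attached to the highest-scoring risks first.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(entry.category, entry.score, entry.mitigations)
```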

In the UK this is particularly challenging, given the extremely wide scope of what may be harmful to a given user. Establishing what a person of "ordinary sensibilities" under the Online Safety Bill looks like, and what constitutes a "significant adverse physical or psychological impact" on them, will keep content moderation teams on their toes for a long while. Even with illegal content – applicable to both the UK and EU proposals and which, on the face of it, may seem easier to identify – there remains a challenge in establishing the extent to which platforms must proactively prevent access.

For instance, if a platform receives a complaint about a user posting objectionable content and has reasonable grounds to believe that the user could potentially post illegal content in the future, does the platform need to take action to remove that user now? Or should it wait for concrete items of illegal content to be posted, by which time the harm might already have been caused? Furthermore, platforms need to be cognisant of "over-takedown", particularly by artificial intelligence-based systems that are not able to assess context in the same way that a human might. For example, content that is clearly satirical or parodic in nature could be picked up in an automated keyword search using scanning technology that cannot distinguish it from genuinely hateful content.
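The over-takedown point can be seen in a few lines of code. In the sketch below, a context-blind keyword scan trips on a satirical post and a genuinely abusive one alike; routing hits to human review rather than auto-removal is one assumed mitigation. The blocklist and routing logic are illustrative, not drawn from any real moderation system.

```python
# Illustrative only: a context-blind keyword scan cannot tell satire from
# genuine abuse, which is why automated removal risks over-takedown.
BLOCKLIST = {"hateword"}  # placeholder standing in for a real term list

def keyword_scan(text: str) -> bool:
    """True if any blocked term appears, regardless of context."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def route(text: str) -> str:
    """Queue keyword hits for human review instead of removing automatically."""
    if not keyword_scan(text):
        return "publish"
    # A human reviewer can weigh context (parody, quotation, counter-speech);
    # the scanner alone cannot.
    return "queue for human review"

# Both posts trip the same rule: the over-takedown problem in miniature.
print(route("hateword aimed at a group in a genuinely abusive post"))
print(route("a satirical sketch quoting 'hateword' to mock those who use it"))
```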

The 2022 challenge

Online platforms will undoubtedly face numerous tricky issues when trying to implement appropriate structures to comply with these online safety proposals, while taking care not to be accused of infringing on freedom of expression. They will be looking to regulatory codes of conduct and guidance to help translate the legal requirements into practical examples, particularly for borderline content. Nevertheless, the overarching expectation of lawmakers that platforms must take a searching look at their own content moderation practices and potential online safety pitfalls will not change; platforms would be wise to start doing so, in light of the current drafts, as soon as possible.

Connect with one of our experts

Ben Dunham (lead author), Associate Director, UK, ben.dunham@osborneclarke.com, +44 20 7105 7554

Julia Darcel, Senior Associate, France, julia.darcel@osborneclarke.com, +33 1 84 82 45 42

Philipp Sümmermann, LL.M., Associate, Germany, philipp.summermann@osborneclarke.com, +49 221 5108 4504

Ashley Hurst, Partner, International Head of Technology, Media and Communications, ashley.hurst@osborneclarke.com, +44 20 7105 7302

Dr. Hans-Christian Woger, Counsel, Germany, christian.woger@osborneclarke.com, +49 30 72 62 18 029
