
The UK Online Safety Bill and its requirements for compliance

By Hetal Bhatt

We've talked a lot about upcoming European regulations like the Digital Services Act (DSA), which seeks to establish continent-wide standards for online safety. The DSA is especially focused on protecting children and stopping the spread of unlawful content across online platforms.

Since it's no longer part of the European Union, the United Kingdom will not be covered by the DSA. Instead, the UK Parliament has drafted the Online Safety Bill, which sets out safety requirements of its own. The bill states it will make online companies "more responsible for their users' safety on their platforms" and contains many mandates similar to the EU's DSA.

Although Parliament is still hashing out the Online Safety Bill, platforms that wish to serve the British public will need to comply upon its enactment or face fines up to £18 million or 10% of their annual worldwide revenue (whichever is larger).


What does the Online Safety Bill require?

The Online Safety Bill will hold platforms with user-generated content legally liable if they fail to keep illegal content off their sites. Like the DSA, the bill prioritizes the safety of children and lays out the following requirements for online platforms to protect minors:

  • Remove illegal and harmful content quickly, or prevent it from being posted in the first place. This includes content that is not necessarily unlawful but promotes harmful behavior such as eating disorders, self-harm, racism, anti-Semitism, or misogyny.
  • Prevent children from accessing adult content.
  • Enforce age limits and implement age verification to keep children under age 13 off social media platforms.
  • Publish risk assessments that transparently show the risks and dangers to children on the largest social media platforms.
  • Provide users with clear and easy ways to report illegal content when they encounter it.

For the general public, the Online Safety Bill touts a "triple shield" of protection to ensure a safe online experience. Specifically, the bill mandates the following for online platforms:

  1. Remove all illegal content.
  2. Remove content that is banned by the platform's own terms & conditions.
  3. Implement tools for users to filter the type of content they see and avoid potentially harmful content that they don't want on their feeds.

When the Online Safety Bill is enacted, platforms will have to show that they have processes in place to meet these requirements. The UK's Office of Communications ("Ofcom") will routinely check those processes for effectiveness and take action when they don't work.

Failure to comply with the bill won't just result in fines – penalties could also include criminal action against senior company managers who don't fulfill information requests from Ofcom. In the most extreme cases, Ofcom could seek court orders forcing payment providers, advertisers, and internet service providers to stop working with a site, effectively blocking access to it in the UK.


How Spectrum Labs can help platforms comply with the Online Safety Bill

Spectrum Labs' online safety solutions address each pillar of the Online Safety Bill. With advanced Contextual AI and customizable automations, Spectrum Labs helps online platforms put effective user-safety measures in place and scale those operations with minimal overhead.

Protecting children

Spectrum Labs' Contextual AI not only detects harmful behavior directed at children; it can also analyze an array of metadata to recognize underage users on a platform.

Through behavior identification solutions, Spectrum Labs can detect when underage users are lying about their age and prevent them from accessing adult content. It offers specific solutions for detecting users under age 18 (who are barred from adult platforms) and under age 13 (who are barred from social media altogether), keeping platforms compliant with the Online Safety Bill.
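
Spectrum Labs doesn't publish the internals of its age-detection models, so the Python below is a purely hypothetical sketch of the general idea: every signal name, weight, and threshold is an invented placeholder, not Spectrum Labs' API. It shows how several weak metadata signals might be combined into one risk score that routes an account to age-verification review.

    from dataclasses import dataclass

    @dataclass
    class UserSignals:
        """Metadata a platform might hold about an account (all fields hypothetical)."""
        stated_age: int            # age the user claimed at signup
        avg_message_length: float  # shorter, simpler messages can correlate with younger users
        slang_score: float         # 0-1 output of an imagined youth-slang language model
        follows_school_accounts: bool

    def underage_risk(s: UserSignals) -> float:
        """Combine signals into a 0-1 risk score; weights are illustrative, not tuned."""
        score = 0.0
        if s.stated_age < 16:
            score += 0.3
        if s.avg_message_length < 40:
            score += 0.2
        score += 0.4 * s.slang_score
        if s.follows_school_accounts:
            score += 0.1
        return min(score, 1.0)

    # Accounts above a review threshold get routed to human age verification.
    user = UserSignals(stated_age=15, avg_message_length=25.0, slang_score=0.8,
                       follows_school_accounts=True)
    if underage_risk(user) > 0.6:
        print("Route account to age-verification review")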

Removing illegal content

Contextual AI is more than just profanity filters and keyword blacklists. It can also detect a variety of harmful behavior from users who deliberately try to evade typical content moderation efforts.

The Online Safety Bill lists a wide range of illegal behavior and content that must be removed from platforms, and Spectrum Labs' solutions can detect a correspondingly broad range of it.

Making it easier to report illegal content

The Online Safety Bill requires sites to create ways for users to report illegal content. Spectrum Labs gives online platforms the infrastructure to implement user-friendly methods for reporting illegal behavior, along with the ability to set up automations that act on repetitive reports with greater speed and efficiency.
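
Spectrum Labs' actual report-handling API isn't public, so the following Python sketch is illustrative only: the Report class, the AUTOMATIONS table, and the action names are all invented. It shows the shape of a report-intake pipeline in which configured categories trigger automatic actions and everything else falls through to human review.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Report:
        """A user-submitted report of illegal content (hypothetical schema)."""
        reporter_id: str
        content_id: str
        category: str  # e.g. "terrorism", "fraud", "csam"
        created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    # Invented automation table: report categories mapped to immediate actions.
    AUTOMATIONS = {
        "csam": "remove_and_escalate",
        "terrorism": "remove_and_escalate",
        "fraud": "hide_pending_review",
    }

    def handle_report(report: Report) -> str:
        """Apply a configured automation, or queue the report for moderators."""
        action = AUTOMATIONS.get(report.category, "queue_for_human_review")
        # A real system would call the platform's enforcement API here.
        print(f"{report.content_id}: {action}")
        return action

    handle_report(Report("u123", "post-456", "fraud"))  # post-456: hide_pending_review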

Scaling content moderation operations

Like the DSA, the Online Safety Bill requires platforms to remove illegal content quickly.

Platforms using Contextual AI can moderate at scale by configuring automated actions for frequently detected types of content, leaving only the most severe cases to human review. When moderators aren't overwhelmed with routine violations, they can process urgent cases much more quickly.
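
One common way to express that kind of automation is a policy table mapping a detected behavior and the model's confidence to an action, with severe harms always routed to people. The minimal Python sketch below is hypothetical; the labels, thresholds, and actions are illustrative stand-ins, not Spectrum Labs' configuration.

    # Invented policy: (behavior label, minimum model confidence, action).
    POLICY = [
        ("hate_speech", 0.95, "auto_remove"),
        ("hate_speech", 0.70, "human_review"),
        ("spam",        0.90, "auto_remove"),
        ("grooming",    0.50, "urgent_human_review"),  # severe harms always go to people
    ]

    def decide(behavior: str, confidence: float) -> str:
        """Return the first matching action; default to allowing the content."""
        for label, threshold, action in POLICY:
            if behavior == label and confidence >= threshold:
                return action
        return "allow"

    assert decide("hate_speech", 0.97) == "auto_remove"       # clear-cut: handled automatically
    assert decide("hate_speech", 0.80) == "human_review"      # borderline: a moderator decides
    assert decide("grooming", 0.55) == "urgent_human_review"  # severe: jumps the queue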

Additionally, Spectrum Labs offers user-level moderation that further scales the removal of harmful behavior by pinpointing the relatively few users who create the bulk of illegal content on a platform. Using user reputation scores, Contextual AI can isolate a platform's most routinely toxic users and refer them for action. This makes overall moderation more efficient: illegal content is addressed at the source rather than chased across hundreds or thousands of individual posts.
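
The reputation-score idea can be sketched as a simple additive score that decays over time, so old violations gradually fade. The decay rate and threshold below are invented values, and a production scoring model would certainly be richer, but the sketch shows why acting on a handful of high-score accounts beats chasing individual posts.

    from collections import defaultdict

    DECAY = 0.9      # per-day multiplicative decay (illustrative value)
    THRESHOLD = 5.0  # score that triggers account-level review (illustrative value)

    scores: dict[str, float] = defaultdict(float)

    def record_violation(user_id: str, severity: float) -> None:
        """Add a confirmed violation's severity to the user's running score."""
        scores[user_id] += severity

    def decay_scores() -> None:
        """Run daily so users who reform aren't flagged forever."""
        for user_id in scores:
            scores[user_id] *= DECAY

    def flagged_users() -> list[str]:
        """Users whose accumulated score warrants account-level action."""
        return [u for u, s in scores.items() if s >= THRESHOLD]

    record_violation("u42", 3.0)
    record_violation("u42", 3.0)
    print(flagged_users())  # ['u42'] -- one account review instead of many post takedowns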


Further information

The easiest way to learn how Spectrum Labs can help your online platform become compliant with government regulations is to contact us!

If you'd rather read more first, check out the following:

Learn more about how Spectrum Labs can help you create the best user experience on your platform.