
Regulatory Compliance

For online platforms, navigating the complex, ambiguous world of Trust & Safety regulations can be confusing and challenging.

The Regulatory Landscape

The internet has grown and changed so quickly that social norms - and regulatory frameworks - have had a difficult time keeping up. Online platforms must balance a number of priorities: providing a safe environment for users, complying with regulatory requirements, and meeting business objectives for revenue and growth. For Trust & Safety leaders, it can feel like everyone is scrambling to manage the implications of online activities.

This has led to an increasingly complex legal and regulatory environment - one that is constantly changing and often ambiguous. The ambiguity stems from a lack of standardized terminology, the absence of widely accepted behavioral protocols, and varying standards of behavior across platforms (and even across different audiences on the same platform).

Online platforms must also navigate a gray area in accountability. To what extent are platforms legally responsible for user behavior? Are they legally or morally obligated to prevent certain types of behavior on their site, or are the individuals themselves accountable?

Spectrum Labs offers AI-based solutions that support your legal compliance efforts, the first step of which is understanding the different regulations. This overview is intended to supplement, not replace, professional legal or regulatory advice.

One way to navigate the complicated web of responsibility is to become familiar with regulatory actions at the federal and state levels. Even if a regulation doesn’t affect your current audience, it could indicate a trend heading your way soon. 

For example, China was the first country to require that personal information be stored physically within its borders. Now, other countries have followed suit: Israel, Switzerland, Turkey, Belgium, Brazil, Mexico, India, and Singapore¹ all have enacted data residency legislation. Similarly, changes to Trust & Safety regulations in one region can signify an upcoming change that will affect other locations.

Learn More: Changes to Regulatory Landscape for Trust & Safety Teams

U.S. Communications Decency Act, Section 230

Section 230 generally provides immunity from liability for platforms that host third-party content. It also includes protection from civil liability for removing or moderating content under the “Good Samaritan” clause. This means that a platform can remove material that it deems offensive - even constitutionally protected free speech - provided it does so in good faith.

Limits to Immunity Under Section 230

Section 230 doesn’t provide platforms complete immunity, however. They are still required to remove content that violates federal law, such as copyright infringement. Section 230 was also amended by the FOSTA-SESTA acts, which require the removal of content that violates either federal or state sex trafficking laws.

There is also legal precedent under which an online platform can be held responsible for failing to warn users of potentially dangerous activity. In Jane Doe No. 14 v. Internet Brands, Inc., a user sued a website for failing to warn her of a potential rape scheme, and the courts have so far rejected the platform’s claims of immunity under Section 230.

SESTA and FOSTA

In 2018, the Stop Enabling Sex Traffickers Act (SESTA) and the Fight Online Sex Trafficking Act (FOSTA) were signed into law. These acts had two purposes: first, to remove Section 230 protection from online services whose users post ads or content that promote prostitution or human trafficking; second, to make it illegal to knowingly assist, facilitate, or support sex trafficking. Beyond compliance with SESTA and FOSTA, all online platforms and publications, particularly dating sites, have a keen interest in ensuring that no solicitation for sex or human trafficking appears on their sites. Such solicitations put the platform's users and brand reputation at risk.


Children’s Online Privacy Protection Act (COPPA)

Originally enacted in 1998 and expanded in 2013, COPPA is a regulation overseen by the FTC that prohibits online platforms from collecting personal information from children under the age of 13 without parental consent. Penalties for non-compliance can reach $42,530 per violation - enough to damage small to mid-sized companies but, critics argue, not enough to truly deter tech giants like Apple or Microsoft.

In 2019, the FTC levied a $170 million fine to settle allegations that YouTube had violated COPPA by illegally collecting personal information from children without their parents’ consent, the largest COPPA fine to date. With the increase in children’s internet usage resulting from the COVID-19 pandemic, regulators are now calling for more attention to COPPA violations.

Let’s take a deeper dive into what COPPA compliance entails.

PII

Since COPPA prohibits the collection and sharing of minors’ personal information, Spectrum Labs can detect whenever users share PII about themselves or other individuals, including email addresses, phone numbers, and handles for other online platforms (“msg me on <platform>”).
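
To make those PII categories concrete, here is a minimal pattern-based sketch in Python. The patterns, function name, and example message are illustrative assumptions, not Spectrum Labs’ implementation; its production detection relies on trained Contextual AI models rather than bare regular expressions.

    import re

    # Illustrative patterns only; a production system would use trained
    # models and many more signals than these regexes.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
        # "msg me on <platform>" style handle sharing
        "handle_share": re.compile(r"\b(?:msg|message|dm|add)\s+me\s+on\s+\w+", re.IGNORECASE),
    }

    def detect_pii(message: str) -> list[str]:
        """Return the PII categories detected in a chat message."""
        return [label for label, pattern in PII_PATTERNS.items() if pattern.search(message)]

    print(detect_pii("msg me on snapchat or email kid123@example.com"))
    # -> ['email', 'handle_share']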

Age verification

Spectrum Labs’ Contextual AI can detect conversational cues that indicate a user is under 13 years old (“when i’m 10 i can get a phone”) and recognize users under 18, helping platforms better comply with COPPA.
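
A toy heuristic for the kind of conversational age cue quoted above might look like the sketch below. The phrase patterns and the under-13 threshold are hypothetical stand-ins; a contextual model learns from far richer conversational signals than keyword matching.

    import re

    # Toy heuristic: real age detection uses contextual models, not keyword
    # lists. These self-reported age patterns are illustrative assumptions.
    UNDERAGE_CUES = [
        re.compile(r"\bwhen\s+i'?m\s+(\d{1,2})\b", re.IGNORECASE),       # "when i'm 10 ..."
        re.compile(r"\bi'?m\s+(\d{1,2})\s+years?\s+old\b", re.IGNORECASE),
    ]

    def flag_possible_minor(message: str, threshold: int = 13) -> bool:
        """Flag messages whose self-reported age cue falls below the COPPA threshold."""
        for pattern in UNDERAGE_CUES:
            match = pattern.search(message)
            if match and int(match.group(1)) < threshold:
                return True
        return False

    print(flag_possible_minor("when i'm 10 i can get a phone"))  # True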


Children’s Internet Protection Act (CIPA)

The Children’s Internet Protection Act (CIPA) was created in 2000 to help limit a child’s access to obscene or harmful content. It applies specifically to schools and libraries that benefit from the E-rate program, requiring them to restrict access to such websites. It also requires them to set internet safety policies and address the safety of minors' email, chat rooms, and other forms of online communication.

While regulations can be effective in helping to promote online child safety, they are not a complete solution in themselves. Such a widespread, complex, and critical issue requires a multidisciplinary approach using the best of technological solutions, thought leadership, and platform innovation to create actionable insights and real solutions to online child safety.

One of the most effective emerging solutions to online child safety is contextual AI. Contextual AI can interpret contextual cues that other technological solutions miss. It can also be used for content moderation on various media, including text, voice, and chat, and in several different languages.

Spectrum Labs provides AI-powered behavior identification models, content moderation tools, and services to help Trust & Safety professionals safeguard user experience from today's threats, and anticipate those that are coming. Because every company has different needs for content moderation, Spectrum Labs has specialized expertise in gaming, dating, social networks, and marketplaces. 

Learn More: Mitigate Compliance Risk with Contextual AI

EU Digital Services Act (proposed)

The EU Digital Services Act (DSA) requires platforms to be transparent about their content moderation practices and their notice-and-action processes. The Brookings Institution² has interpreted this as an effort to balance a platform’s sense of responsibility with its desire to avoid liability. The DSA also includes additional transparency requirements: platforms must disclose how their algorithms work, how content moderation decisions are made, and how advertisers target users. Non-compliance may result in fines of up to 6% of annual global revenue for a single violation, rising to 10% for repeat violations³.


UK Online Safety Bill (proposed)

Since the UK is no longer part of the EU, it is not covered by the EU DSA. Instead, Parliament has drafted its own regulation, the Online Safety Bill. The bill requires online platforms to remove illegal and harmful content and to protect children, or face a fine of £18 million or 10% of annual worldwide revenue, whichever is greater. Some of the requirements include:

  • Remove illegal and harmful content quickly, or prevent it from being posted in the first place.
  • Prevent children from accessing adult content.
  • Enforce age limits.
  • Provide users with clear and easy ways to report illegal content.

Nevada Assembly Bill 296

The Nevada anti-doxxing law is intended to prevent certain types of online harassment and intimidation by holding individuals accountable for their online actions. Doxxing involves publishing a person’s personally identifiable information (PII) with malicious intent, bringing real-world consequences to a digital-world conflict. The Nevada regulation allows a victim to take civil action to recover damages, legal fees, and costs in the event that their PII is published without consent.

Oregon House Bill 3047

Another anti-doxxing law, the Oregon regulation allows victims of doxxing to sue individuals for damages, including injunctive relief and legal fees. This bill does not have an avenue for platform accountability, nor does it seem to pave the way for future legislation that would allow victims to sue platforms.

GDPR

In general, platforms must comply with the consumer data privacy protection laws of the jurisdiction where the end user resides. Many companies opt to set up a data processing center in the EU to keep all of the EU customers' data within the EU and follow GDPR guidelines for this data. Others opt to simplify compliance by abiding by the most restrictive privacy law, which is GDPR. 

GDPR is a set of regulations designed to protect the privacy and data of individuals in the EU. It lays out eight individual rights, including the right to be informed of how one’s data is used and the right to be forgotten. Article 4 of GDPR defines two types of entities that handle personal data:

  • Data Controller. The entity (person, organization, etc.) that determines the why and the how for processing personal data. You are a data controller if you're an internet platform collecting user data for registration and general usage.
  • Data Processor. The entity that processes data on behalf of the controller. Let's say you partner with a marketing automation platform that stores and activates your customers' data. That marketing automation company is a data processor. 

As the Data Controller, you must ensure your Data Processors are in full compliance with GDPR.
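
To illustrate that responsibility in code, the sketch below shows a controller cascading a “right to be forgotten” request to its processors. The erase_user and handle_erasure_request interfaces are hypothetical; a real integration depends on each processor's API and on your data processing agreements.

    from typing import Protocol

    class DataProcessor(Protocol):
        """Any third party that processes personal data on the controller's behalf."""
        def erase_user(self, user_id: str) -> None: ...

    class DataController:
        """The platform: decides why and how personal data is processed."""

        def __init__(self, processors: list[DataProcessor]) -> None:
            self.processors = processors

        def handle_erasure_request(self, user_id: str) -> None:
            self._erase_locally(user_id)
            # The controller is accountable for processor compliance, so the
            # erasure request must be propagated to every contracted processor.
            for processor in self.processors:
                processor.erase_user(user_id)

        def _erase_locally(self, user_id: str) -> None:
            print(f"erasing local records for {user_id}")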

How trust and safety teams can prepare for compliance

Governmental regulations can be an avenue to clarity, providing guidance that standardizes things like terminology and acceptable behaviors.

However, regulations can also work against simplification, adding layers of complexity for online platforms that are already trying to create safe and inclusive environments for their users. Here are a few key steps to keep in mind when updating your operations for regulatory compliance:

  1. Update the trust and safety policy and community guidelines so all of the behaviors in a regulation are covered.
  2. Update internal content moderation AI to reflect the new policy or outsource scaling coverage to a Trust & Safety vendor.
  3. Update content moderation AI actioning speed to meet regulation standards.
  4. Create a process for user reports and an appeals process.
  5. Create a process for collecting data and reports to include in the transparency report (a minimal sketch of steps 4 and 5 follows this list).
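
As a rough illustration of steps 4 and 5, the sketch below models a user report that can be actioned and appealed, with an audit trail that can later feed a transparency report. The field and status names are hypothetical, not a regulatory schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from enum import Enum

    class ReportStatus(Enum):
        OPEN = "open"
        ACTIONED = "actioned"
        APPEALED = "appealed"

    @dataclass
    class UserReport:
        reporter_id: str
        content_id: str
        reason: str
        status: ReportStatus = ReportStatus.OPEN
        created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        events: list[str] = field(default_factory=list)  # audit trail for the transparency report

        def action(self, decision: str) -> None:
            self.status = ReportStatus.ACTIONED
            self.events.append(f"actioned: {decision}")

        def appeal(self) -> None:
            self.status = ReportStatus.APPEALED
            self.events.append("user appealed the decision")

    report = UserReport(reporter_id="u1", content_id="c9", reason="harassment")
    report.action("content removed")
    report.appeal()
    print(report.status, report.events)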

Learn More: The State of Online Safety Regulations

Working with a trust and safety vendor

A Trust & Safety solution can help platforms stay on top of legal compliance, supporting efforts to protect users and their business. An AI-based solution can provide the consistency, transparency, and accuracy required to comply with a rapidly changing regulatory environment. Benefits of working with a trust & safety vendor include:

  1. Save Product & Engineering Resources.
  2. Achieve Faster Time-to-Safety.
  3. Ensure Unbiased Models.
  4. Maintain Data Privacy & Ownership.
  5. Drive Higher Engagement and Lower Customer Acquisition Costs.

If you would like more information about how Spectrum Labs helps online platforms create safe, inclusive, and legally compliant communities, please get in touch with us today.

Contact Spectrum Labs Today

Whether you are looking to safeguard your audiences, increase brand loyalty and user engagement, or maximize moderator productivity, Spectrum Labs empowers you to recognize and respond to toxicity in real-time across languages. Contact Spectrum Labs to learn more about how we can help make your community a safer place.