
Regulatory Compliance

For online platforms, navigating the complex, ambiguous world of Trust & Safety regulations is a confusing and challenging task.

 

The internet has grown and changed so quickly that social norms and regulatory frameworks have had a difficult time keeping up. Online platforms must balance a number of priorities: providing a safe environment for users, complying with regulatory requirements, and meeting business objectives for revenue and growth. For Trust & Safety leaders, it can feel like everyone is scrambling to manage the implications of online activity.

This has led to an increasingly complex legal and regulatory environment, one that is constantly changing and often ambiguous. The ambiguity stems from a lack of standardized terminology, the absence of widely accepted behavioral protocols, and varying standards of behavior across platforms (and even across different audiences on the same platform).

Online platforms must also navigate a gray area in accountability. To what extent are platforms legally responsible for user behavior? Are they legally or morally obligated to prevent certain types of behavior on their site, or are the individuals themselves accountable?

Spectrum Labs offers AI-based solutions that support your legal compliance efforts, and the first step in those efforts is understanding the relevant regulations. The overview below is not intended to take the place of professional legal or regulatory advice, however.

Download the Webinar: Tech Policy Changes on the Horizon

One way to navigate the complicated web of responsibility is to become familiar with regulatory actions, both at the federal and state level. Even if a regulation doesn’t affect your current audience, it could indicate a trend that is heading your way in the near future. 

For example, China was the first country to require that personal information be stored physically within its borders. Now, other countries have followed suit: Israel, Switzerland, Turkey, Belgium, Brazil, Mexico, India, and Singapore¹ all have enacted data residency legislation. Similarly, changes to Trust & Safety regulations in one region can signify an upcoming change that will affect other locations.


Regulations You Should Know

U.S. Communications Decency Act, Section 230

Section 230 generally provides immunity from liability for platforms that host third-party content. It also includes protection from civil liability for removing or moderating content under the “Good Samaritan” clause. This means that a platform can remove material it deems offensive, even constitutionally protected free speech, provided it does so in good faith.

Limits to Immunity under Section 230

Section 230 doesn’t provide platforms with complete immunity, however. They are still required to remove content that violates federal law, such as copyright infringement. Section 230 was also amended by FOSTA-SESTA, which requires the removal of content that violates federal or state sex trafficking laws.

There is also legal precedent suggesting that an online platform can be held responsible for failing to warn users of potentially dangerous activity. In Jane Doe No. 14 v. Internet Brands, Inc., a user sued a website for failing to warn her of a rape scheme, and the courts have so far rejected the platform’s claims of immunity under Section 230.

Potential Changes to Section 230

Politicians have discussed making changes to the decades-old legislation, from amending the protections provided to online platforms to revoking Section 230 entirely. The Department of Justice conducted a review in 2020 and made recommendations for amendments to Congress.

These include:

  • Incentivizing platforms to deal with illicit content
  • Removing protections from civil lawsuits brought by the federal government
  • Disallowing antitrust protections for large platforms
  • Defining ambiguous terms such as ‘otherwise objectionable’ and ‘in good faith’
  • Requiring transparency by making moderation actions publicly accessible (unless doing so poses a risk to an individual or a law enforcement proceeding)

Another proposed change that would limit platform protections under Section 230 is the Health Misinformation Act. If passed, it would suspend Section 230 protections when social platforms boost anti-vaccine conspiracies or other types of health misinformation.
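
Transparency reporting of the kind recommended above would require platforms to keep a durable, publishable record of every moderation action. As a rough sketch (the schema and field names below are our own illustration, not drawn from any statute or proposal), such a record might capture what was done, why, and when, plus a redaction flag for the exception noted in the recommendation:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ModerationRecord:
    """One publishable moderation action (hypothetical schema)."""
    content_id: str    # internal ID of the moderated content
    action: str        # e.g. "removed", "demoted", "labeled"
    policy_cited: str  # the specific rule invoked, not just "otherwise objectionable"
    decided_at: str    # ISO 8601 timestamp of the decision
    redacted: bool     # True if publishing would endanger an individual
                       # or a law enforcement proceeding

record = ModerationRecord(
    content_id="post-48291",
    action="removed",
    policy_cited="harassment/doxxing",
    decided_at=datetime.now(timezone.utc).isoformat(),
    redacted=False,
)
print(json.dumps(asdict(record), indent=2))
```

Citing a specific policy, rather than a vague category, is exactly the kind of definitional clarity the DOJ recommendations call for.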

 

Children’s Online Privacy Protection Act (COPPA)

Originally enacted in 1998 and expanded in 2013, COPPA is a regulation overseen by the FTC that prohibits online platforms from collecting personal information from children under the age of 13 without parental consent. Penalties for non-compliance can reach $42,530 per child, per day, enough to damage small and mid-sized companies but, critics argue, not enough to truly hurt tech giants like Apple or Microsoft.
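
To see how quickly that exposure compounds, here is a back-of-the-envelope calculation using the statutory maximum cited above. It assumes, purely for illustration, that each affected child on each day of non-compliance counts as a separate violation; actual penalties are determined by the FTC and the courts:

```python
# Worst-case COPPA exposure using the $42,530 maximum cited above.
MAX_PENALTY_PER_VIOLATION = 42_530  # USD

def max_exposure(children_affected: int, days_noncompliant: int) -> int:
    """Statutory worst-case exposure in USD, assuming one violation
    per affected child per day of non-compliance (our assumption)."""
    return children_affected * days_noncompliant * MAX_PENALTY_PER_VIOLATION

# Even a modest lapse adds up fast: 100 under-13 users over one week.
print(f"${max_exposure(100, 7):,}")  # -> $29,771,000
```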

In 2019, the FTC levied a $170 million fine to settle allegations that YouTube violated COPPA by illegally collecting personal information from children without their parents’ consent, the largest COPPA fine to date. With the increase in children’s internet usage resulting from the COVID-19 pandemic, regulators are now calling for more attention to be given to COPPA violations.


 

EU Digital Services Act (proposed)

The EU Digital Services Act (DSA) maintains the current rule providing immunity for platforms that host third-party content, but adds new obligations for online platforms. The DSA requires platforms to be transparent about their content moderation practices and their notice-and-action processes. The Brookings Institution² has interpreted this as an effort to balance a platform’s sense of responsibility with its desire to avoid liability.

The DSA also includes additional transparency requirements, stating that platforms must disclose how algorithms work, how content moderation decisions are made, and how advertisers target users. Non-compliance may result in fines of up to 6% of annual global revenue for a single violation, increasing to 10% for repeat violations³.
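
As a quick illustration of what those percentage caps mean in practice (the function and the revenue figure below are our own sketch; the DSA sets maximums, not fixed amounts):

```python
def dsa_fine_cap(annual_global_revenue: float, repeat_offender: bool = False) -> float:
    """Upper bound on a DSA fine, per the percentages cited above:
    6% of annual global revenue, or 10% for repeat violations."""
    rate = 0.10 if repeat_offender else 0.06
    return annual_global_revenue * rate

revenue = 50_000_000_000  # hypothetical $50B in annual global revenue
print(f"First violation cap:  ${dsa_fine_cap(revenue):,.0f}")        # $3,000,000,000
print(f"Repeat violation cap: ${dsa_fine_cap(revenue, True):,.0f}")  # $5,000,000,000
```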


Nevada Assembly Bill 296

The Nevada anti-doxxing law is intended to prevent certain types of online harassment and intimidation by holding individuals accountable for online actions. Doxxing involves publishing a person’s personally identifiable information (PII) with malicious intent, bringing real-world consequences to a digital-world conflict. The Nevada regulation allows a victim to take civil action to recover damages, legal fees and costs in the event that PII is published without their consent.
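
On the platform side, catching doxxing before it causes harm starts with spotting PII in user-generated content. A minimal sketch of that first step might look like the following; the regex patterns are illustrative only, and production systems (including AI-based ones) rely on far richer signals of context and intent:

```python
import re

# Illustrative patterns only: real doxxing detection needs far more
# than regexes (context, malicious intent, and many more PII formats).
PII_PATTERNS = {
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email":    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_possible_pii(message: str) -> list[str]:
    """Return the PII categories a message appears to contain."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(message)]

print(flag_possible_pii("here's her number: 555-867-5309"))  # ['us_phone']
```

A flag like this would typically route the message to human review rather than trigger automatic removal, since plenty of PII is shared consensually.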

Read the blog: Anti-Doxxing Laws: a Trust & Safety Concern


Connecticut Senate Bill 989 (proposed)

The Connecticut anti-doxxing bill is likewise intended to prevent online harassment and intimidation. It allows a victim to sue the individual who posts their PII online without permission, as well as anyone who financially benefits from the posting.


Oregon House Bill 3047

Another anti-doxxing law, the Oregon regulation allows victims of doxxing to sue individuals for damages, injunctive relief, and legal fees. The bill provides no avenue for platform accountability, nor does it appear to pave the way for future legislation that would allow victims to sue platforms.

Learn More: What is Doxxing, and Why Does it Matter?


As the regulations above show, navigating the world of Trust & Safety compliance remains confusing and challenging for online platforms. Governmental regulation can be an avenue to clarity, providing guidance that standardizes things like terminology and acceptable behavior.

However, regulations can also work against simplification, adding layers of complexity for online platforms that are already trying to create safe and inclusive environments for their users.

A Trust & Safety solution can help platforms stay on top of legal compliance, supporting their efforts to protect both their users and their business. An AI-based solution can provide the consistency, transparency, and accuracy required to keep pace with a rapidly changing regulatory environment. For more information about how Spectrum Labs helps online platforms build safe, inclusive, and legally compliant communities, contact us today.


  1. https://www.parkplacetechnologies.com/blog/companies-managing-new-data-residency-requirements/
  2. https://www.brookings.edu/blog/techtank/2021/01/06/four-lessons-for-u-s-legislators-from-the-eu-digital-services-act/
  3. https://www.americanactionforum.org/insight/the-eus-digital-services-act-a-primer/#:~:text=If%20platforms%20fail%20to%20comply,the%20case%20of%20repeat%20offenders.

 


Whether you are looking to safeguard your audiences, increase brand loyalty and user engagement, or maximize moderator productivity, Spectrum Labs empowers you to recognize and respond to toxicity in real-time across languages.

Contact Spectrum Labs to learn more about how we can help make your community a safer place.

Contact Spectrum Labs Today