
Section 230: Its History and Future

By Katie Zigelman

The internet has been a means of open communication almost since its beginning, back in the bulletin board era.

In 1996, the U.S. government took action with the Communications Decency Act, which was designed to control speech on the internet. Ironically, it contained one of the strongest protections that remains to this day.

What was the Communications Decency Act?

The Communications Decency Act (CDA) was primarily designed to criminalize making "indecent" and "patently offensive" material available to anybody under 18 years of age. It was passed in early 1996, but the Supreme Court struck down its anti-indecency provisions in 1997 (Reno v. ACLU). The terms "indecent" and "offensive" were too broad, age verification was infeasible at the time, and the nature of the internet meant the law chilled free speech between adults. The act also required cable operators to scramble sexually explicit channels so they could not be viewed by non-subscribers, and made it illegal to harass people over the internet.

What is Section 230?

Section 230 of the act, in contrast, provides a vital protection for free speech. The key provision reads: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." In plain terms, this means that if somebody makes threats via Facebook, Facebook is not held legally responsible for them. There are exceptions for federal criminal law and intellectual property claims (for example, somebody can still require that YouTube take down a video that uses their copyrighted work without permission). The provision also shields bloggers from liability for comments left on their blogs. It was created in response to Stratton Oakmont v. Prodigy, a 1995 libel suit in which the bulletin board provider Prodigy was held responsible for user posts precisely because it moderated content. Section 230 also protects online platforms when they remove content they find objectionable. In the years that followed, courts often interpreted it broadly.

One side effect of Section 230 is that it offers large online services protections that have no equivalent in Europe or Canada, which is one reason so many of them are based in the United States.

How is Section 230 Changing?

Section 230 has recently come under attack. FOSTA-SESTA, passed in 2018, carved out an exception that removes immunity for content facilitating sex trafficking and prostitution, effectively requiring companies to keep their platforms from being used by sex workers. The result was the closure of Craigslist's personals section and new Facebook rules so strict that even flirting in messages can violate them. Republicans have put forward a bill that would condition immunity on companies moderating without political bias. Some Democrats would like to see companies held more responsible for user content. Others believe there are better solutions than tampering with Section 230.


What is the Likely Future?

In the current climate, there are likely to be more attacks on Section 230. Unfortunately, tech giants will weather any attempt to gut the law far better than smaller operators will. However, most critics don't want to remove Section 230 altogether. What is much more likely is a narrowing of its protections; for example, companies and platforms might be required to take illegal content down once they are made aware of it, much the way copyright notice-and-takedown works.

There is also the possibility that some or all attempts to limit Section 230 will be ruled unconstitutional, just as other attempts to censor the internet have been. There has not yet been a true constitutional challenge to FOSTA-SESTA's provisions: the EFF brought suit, but the case was dismissed for lack of standing. If the courts eventually rule on this restriction on the merits, we may get a much clearer idea of how far Congress can go in narrowing Section 230.

So, what should companies do? It's possible that Section 230 will be weakened in a way that makes moderation itself legally risky. More likely, though, platforms will face new requirements to remove illegal and offensive material, which makes moderation more important than ever. AI moderation tools can catch at least some problems before content is even posted, and can help human moderators recognize and deal with bad behavior.

In the future, this may help you avoid takedown notices and even liability. It's vital for companies to take steps to moderate in a fair and appropriate way. The more we do so, the less chance there is of an attack on Section 230 happening in the first place.

Learn More About Regulatory Compliance

Learn more about how Spectrum Labs can help you create the best user experience on your platform.