
PSA: Stop managing keyword lists

By Lee Davis

Every time I meet someone new in Trust & Safety and hear them mention having to maintain lists of keywords or Regular Expressions (RegEx), I cringe a little bit.

Not because I want these hard-working people to let their lists go stale, or their user protections to lapse and toxic behavior to rise, but because there's a better way to keep your community safe, one that doesn't involve spending time updating lists. And all of the time people currently spend on list maintenance could go toward something much more valuable.

So, why am I so opposed to managing lists of keywords and RegEx?

Well, simply put, they're flawed. They may work well in certain use cases, but they're problematic for five main reasons:

1. The language of the internet is constantly evolving.

This means that your lists need to be constantly updated in order to keep up. And the problem with manual keywords and RegEx is that you have to see the terms and patterns in action enough times to really understand them before you can write a rule to catch them.
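To make this concrete, here's a minimal sketch of why a static deny-list goes stale (the terms and list contents below are invented for illustration):

```python
import re

# A toy deny-list of the kind a Trust & Safety team might maintain by hand.
# (These patterns are invented for illustration.)
banned_patterns = [re.compile(p, re.IGNORECASE) for p in [r"\bidiot\b", r"\bloser\b"]]

def is_flagged(message):
    # Flag the message if any pattern in the list matches it.
    return any(p.search(message) for p in banned_patterns)

print(is_flagged("what a loser"))        # True: the list already knows this term
print(is_flagged("he took the big L"))   # False: newer slang slips through until someone updates the list
```

The list only catches what someone has already seen, written down, and deployed; everything newer passes silently until the next manual update.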

2. There can be many exceptions based on context.

Talking about “large melons” in a comment on a fruit salad recipe is very different from talking about the same thing in a dating app. One is totally innocent; the other can amount to sexual harassment. Another example is activism: people can use certain terms to raise awareness for good and relevant causes without using those terms in a nefarious way. Yet if you have a keyword or RegEx system in place, you may miss this context and flood your moderators with false positives. Or you're stuck trying to write rules for each and every exception, of which there is no limit.

Related Reading: Context Matters: Separating trash-talk from cyberbullying

3. Online users are experts in coded terms and leet speak.

Remember when it was cool to type “how r u” instead of “how are you”? That's an abbreviation everyone knows, but there are millions more out there, with new ones emerging daily, that have to be accounted for in list updates. And that's just abbreviations. Don't get me started on emojis, sub- and superscripts, substituting characters from other alphabets such as Cyrillic for English letters, and so on. Unless you're creating rules or keywords for every possible iteration of bad words and phrases, the chances of your users outsmarting your profanity filters are pretty high.
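A quick sketch of how easily character substitution defeats a literal pattern (the rule below is a stand-in for a real deny-list entry):

```python
import re

# A single hand-written rule, standing in for one entry in a larger deny-list.
rule = re.compile(r"\bstupid\b", re.IGNORECASE)

print(bool(rule.search("you are stupid")))   # True: the exact spelling is caught
print(bool(rule.search("you are stup1d")))   # False: a single digit swap evades it
print(bool(rule.search("you are ѕtupid")))   # False: Cyrillic 'ѕ' (U+0455) passes for Latin 's'
```

Two of the three messages read identically to a human, yet only one trips the rule.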

4. Scaling to International Languages.

Few people working in Trust & Safety are fluent in multiple languages, and just about no one is fluent in all of them (per Wikipedia, more than 80 languages are spoken by at least 10 million people each). So how do you handle re-creating your master list of terms and rules from English in new languages as you expand globally? How do you pick up on all the nuances of each region and culture and offer any sort of proactive protection for your users in new locations?


5. Latency.

As you add more and more terms and rules to your master list, it takes longer and longer for data to run through it all, causing potential latency for your end users. Patience may be a virtue, but it is not something most online users have when trying to interact with a platform, which means you're in trouble. Short of constantly throwing money at faster hardware, how do you balance keeping your users safe with delivering a satisfying user experience?
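At its core, the problem is that a naive filter checks every rule against every message, so matching cost grows linearly with the size of the list. A rough sketch (rule contents invented; exact timings will vary by machine):

```python
import re
import time

def make_rules(n):
    # Generate n toy regex rules standing in for a hand-maintained deny-list.
    return [re.compile(rf"\bterm{i}\b") for i in range(n)]

def scan(message, rules):
    # A naive filter runs every rule against the message: O(number of rules).
    return any(r.search(message) for r in rules)

message = "a perfectly ordinary chat message"
for n in (100, 10_000):
    rules = make_rules(n)
    start = time.perf_counter()
    for _ in range(100):
        scan(message, rules)
    elapsed = time.perf_counter() - start
    print(f"{n:>6} rules: {elapsed:.4f}s to scan 100 clean messages")
```

Note that clean messages are the worst case: they match nothing, so every rule runs to completion on every one of them, and clean messages are the overwhelming majority of traffic.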

So what can you do if you don’t want to continue maintaining lists? Enter: Contextual AI

Spectrum’s context-sensing technology uses deep learning to understand the nuanced language of your platform, detecting nefarious behaviors both in single messages and in patterns that build over time.

1. We constantly refine models for evolving language.

Our refinement process means that your behavior-identification models are constantly being re-trained and updated for the latest trends. We use customer feedback and new data sources to iterate on our models, and we apply transfer learning: if we see new hateful trends in one outlet, we can apply those learnings more broadly, benefiting all customers.

2. We take context into consideration.

We use aspect models to look at the full picture of an entire conversation, a user's stream, a forum, etc., and pick up on complex behaviors such as sexual harassment, and the difference between consensual trash-talking and cyberbullying. We also incorporate metadata into our models, so we see where messages are being posted, when, and by whom, and can make informed determinations.

3. We expect coded language and leet speak.

Typos, clever (or really not-so-clever) emoji or alphabet substitutions meant to skirt filters, and the like are handled through tailored preprocessing steps and through specialized character-based models.
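As a rough illustration of that kind of preprocessing (the mapping table below is invented and tiny; production normalizers cover far more substitutions), a normalizer can fold common look-alike characters back to their Latin equivalents before any rule or model sees the text:

```python
# Tiny, illustrative homoglyph/leet mapping; real tables are much larger.
HOMOGLYPHS = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t",
    "а": "a",  # Cyrillic а (U+0430) → Latin a
    "е": "e",  # Cyrillic е (U+0435) → Latin e
    "ѕ": "s",  # Cyrillic ѕ (U+0455) → Latin s
})

def normalize(text):
    # Lowercase first, then fold look-alike characters to Latin letters.
    return text.lower().translate(HOMOGLYPHS)

print(normalize("ѕtup1d"))  # "stupid"
```

With normalization in front, the evasions from earlier collapse back to their plain spellings, and downstream detection sees one canonical form instead of endless variants.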

4. We have a patent-pending approach to international languages.

Even when there is a need for a lower-resourced language, we can use our patent-pending approach to expand behavior identification to new regions. We use an aligned-embedding technique that significantly improves our ability to apply existing models to new languages with minimal lead time.

5. We are built to scale.

At a very high level, running data through a handful of models simply takes less time than running it through thousands of rules. It helps that we have a background in big data (our engineering leaders previously built what is now Salesforce Marketing Cloud's Data and Identity infrastructure) and built our systems with a scale-by-design mentality to meet customers' response-time needs.

Now, imagine a world where you are using contextual AI to proactively detect nefarious behaviors and prioritize your moderation efforts, and you no longer have to maintain lengthy lists of keywords and RegEx. What could you be doing with all that free time? Here are a few suggestions:

  • Review trends and patterns and create automated actions so your moderators don’t have to review the most offensive and common behaviors.
  • Think about education opportunities within your platform to focus on behavior modification instead of going straight to kicking users out.
  • Consider positive behavior trends and what you can do to encourage more of those.
  • Or even just go for a walk and come up with your own creative ideas to keep your users safe. (Apparently walking is good for thinking!)

Whatever it is, please consider if maintaining lengthy lists of keywords and RegEx is really empowering you to do your job of keeping your community safe, or if you, too, can see a better option.

Discover AI-Powered Content Moderation

Learn more about how Spectrum Labs can help you create the best user experience on your platform.