A Growing Threat to Online Communities
Spammers and scammers can erode user experience and brand perception faster than almost anything else. Constant profile updates with irrelevant content, chat rooms flooded with repeated messages, and “experts” touting the next crypto investment guaranteed to make you a fortune: these are just a few common examples of the spam that moderators and their community members deal with every day. Worse still are the spammer-scammers who go beyond annoying users to actually put them at risk.
According to the FTC, over $770M in reported scam losses originated on social sites in 2021. That’s 18 times the figure from just four years earlier. If you’re feeling this pain, you’re not alone. Spammer-scammers and lesser annoyances affect every type of community: not just social media, but dating, gaming, and even marketplaces. The FTC also reported that 45% of 95,000 reported scams occurred in online shopping settings. Spammers use any platform that gives them free access to a large audience.
Spammers Are Increasingly Inventive
Not only do spammers pose a threat to communities, they are financially motivated to remain elusive. Even when detected, they quickly find ways to keep spamming. Many of the tactics we see are highly adversarial: a cat-and-mouse game that becomes a constant moving target and a significant resource drain for engineering and Trust & Safety teams. For example, spammers may use multiple variations of misspellings, leet speak, extra spacing, Cyrillic letters, or combinations of characters from different languages.
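To make these obfuscation tactics concrete, here is a minimal normalization sketch. The character map and the `normalize` helper are illustrative assumptions, not any vendor's actual implementation; real systems use far larger homoglyph tables.

```python
import unicodedata

# Illustrative map of common substitutions: leet-speak digits and
# Cyrillic homoglyphs that render like Latin letters.
HOMOGLYPHS = {
    "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t",
    "а": "a", "е": "e", "о": "o", "р": "p", "с": "c",  # Cyrillic
}

def normalize(text: str) -> str:
    """Fold obfuscated text toward a canonical form before matching."""
    text = unicodedata.normalize("NFKC", text).lower()
    text = "".join(HOMOGLYPHS.get(ch, ch) for ch in text)
    # Collapse the extra spacing spammers insert between letters.
    return text.replace(" ", "")

print(normalize("F R 3 3  crypt0"))  # -> freecrypto
```

Only after folding text like this does keyword matching stand a chance, and even then, spammers simply rotate to substitutions the map does not cover.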
Spammers have become too sophisticated to control with simple signals such as content velocity, recently created accounts, or common content patterns. Maintaining complex blocklists of keywords and regular expressions is a drain on engineering time, and teams that do maintain extensive blocklists often end up over-triggering and punishing community members who aren’t spamming at all. Alternatively, the volume of potential spam queued for moderator review becomes unmanageable, with too many invalid cases.
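The over-triggering problem is easy to reproduce. A naive blocklist (the patterns below are illustrative) flags legitimate messages alongside spam, while a trivial misspelling slips straight through:

```python
import re

# Illustrative blocklist entries aimed at "free crypto" spam.
BLOCKLIST = [
    re.compile(r"free", re.IGNORECASE),
    re.compile(r"invest", re.IGNORECASE),
]

def is_flagged(message: str) -> bool:
    """Flag a message if any blocklist pattern matches anywhere in it."""
    return any(p.search(message) for p in BLOCKLIST)

# Catches the spam...
print(is_flagged("FREE crypto, guaranteed returns!"))  # True
# ...but also punishes a legitimate community member.
print(is_flagged("Is this mod free to use?"))          # True (false positive)
# And a trivial misspelling evades the list entirely.
print(is_flagged("Fr33 crypt0, guaranteed returns!"))  # False (false negative)
```

Every new evasion forces another pattern into the list, which in turn raises the false-positive rate: the cat-and-mouse dynamic in miniature.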
Many Spammers Are Also Scammers
Oh, and have we mentioned the long cons? Not the one-message, one-chatroom scams or annoyances. These cons often involve relationship building, a level of trust, and an emotional connection with an individual victim. They frequently use sympathy or guilt to con people out of their money or identity information. In the Netflix series Inventing Anna (based on a true story), a woman was swindled out of over $60k by her “friend,” who convinced her she was an heiress and assured her she would pay her back. Similarly, in The Tinder Swindler, a man conned people out of millions of dollars using their credit card and identity information, after convincing them he was wealthy and facing an emergency that temporarily cut off his access to money.
Spammers are opportunists, and have even used recent world events such as Russia’s invasion of Ukraine to divert donations to fake fundraisers. Communities and their Trust & Safety teams may not be able to catch all the spammers, scammers, and cons, but we can educate our communities, prevent a large portion of spam, and put more obstacles in their way.
Approaches to Stop Spam
Even when you are able to catch spammers, you can’t just treat them like others who violate community policies. If you immediately ban their account or redact content, they’ll have instantaneous feedback about what not to do or what to do differently. Spammers don’t need any help figuring out how they’re being detected. To them, it’s a game, and they love to play.
Best practices are to allow spammers to continue to comment but prevent other users from seeing their comments (“ghost comment”), or to ban accounts without notification (“shadow ban”). This creates a delay between detection and the moment the spammer realizes they’ve been caught, slowing their ability to adapt. Another important anti-spam measure is to balance ease of adoption with added security, such as requiring email validation for account sign-ups and throttling content submissions when necessary to ensure quality. User acquisition and engagement metrics are important, but the absence of such checks can inflate metrics and make them less meaningful, introduce vulnerabilities, and create brand reputation risks.
Government regulation is also gathering momentum: well-intentioned, but often ineffective, pressure to fix spam and scam issues. We recommend self-regulation, such as the user safety standards from the OASIS Consortium, an industry think tank. As a founding member of OASIS, we understand the challenges of privacy and safety compliance and helped develop these best practices based on research and expertise.
The UK Online Safety Bill is an upcoming law that will require online companies to tackle a range of harmful and illegal content on their platforms. The bill will require the largest and most popular social media platforms and search engines operating in the UK to prevent paid-for fraudulent advertisements from appearing on their services. It was introduced after fake advertisements appeared in which criminals impersonated celebrities to steal data, access bank accounts, or promote investments the celebrity never endorsed.
How You Can Take Action for Your Community
In the US, the FTC provides information about how to avoid a scam. Let’s investigate what’s happening in our communities and implement best practices. Together we can stop spammers, reduce risks to users, and improve the quality of user experiences for greater engagement.
To find out how to stop spammers with an AI solution that adapts as quickly as they do, download the Spectrum Labs Spam Solution Guide.
If you’re dealing with spammers on your platform, you are not alone. Join the conversation with other Trust & Safety leaders at #TSCollective to learn the latest in anti-spam best practices.