Today, when someone writes a caption for a feed post and our AI detects it as potentially offensive, they receive a prompt letting them know the caption is similar to others reported for bullying. They then have the opportunity to edit the caption before it’s posted.
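The flow above can be sketched in a few lines. This is a hypothetical illustration, not the actual system: the keyword scorer stands in for a trained classifier, and the function names and threshold are assumptions.

```python
# Hypothetical sketch of a pre-post "light check". A real deployment would
# use a trained model; the keyword list here is a toy stand-in.
FLAGGED_TERMS = {"loser", "idiot"}  # placeholder for a learned model

def bullying_score(caption: str) -> float:
    """Toy scorer: fraction of words matching flagged terms."""
    words = caption.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in FLAGGED_TERMS for w in words) / len(words)

def check_caption(caption: str, threshold: float = 0.2):
    """Return (ok, message). ok=False means prompt the author to edit first."""
    if bullying_score(caption) >= threshold:
        return False, ("This caption looks similar to others reported "
                       "for bullying. Edit before posting?")
    return True, ""
```

The key design choice is that the check is a prompt, not a block: the author can still post unchanged, which keeps the feature a nudge rather than censorship.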
Features like this, which give people a light check before they post, are genuinely helpful and should be available in every online community.
Unfortunately, not every online community has the engineering might to design and deploy these sorts of features (even Facebook can’t roll out this feature across all languages at once).
We help online communities of all shapes and sizes deploy features like this immediately, across every content type - not just captions - and across every language.
Rolling out anti-cyberbullying features shouldn’t be the purview of only the wealthiest communities. It should be available to everyone.