
June is Pride Month, a time when people within the LGBTQ+ communities are free to celebrate and embrace who they are, as well as their achievements in the arts, science, medicine, business, and society as a whole.
Throughout Pride Month, people will gather for festivities in parks, streets, museums, clubs, and lecture rooms. Some 2 million people are expected to march in the NYC Pride Parade alone. They’ll also gather online, on social media platforms and dating apps, which can be a safe haven for people who have yet to come out or who live in intolerant communities.
Sadly, there are still many people among us who have yet to accept that love is love. Some 30% of Americans say it’s “morally wrong” to be gay or bisexual. A subset of these people will actively seek to harass or harm people within the LGBTQ+ community. This senseless discrimination is a global problem. Last year, Turkish police used tear gas to break up that country’s Pride Parade.
And harassment on social media is so pervasive that The GLAAD Social Media Safety Index warns readers that Facebook, Instagram, TikTok, YouTube and Twitter are “effectively unsafe for the LGBTQ+ community.”
Creating Safe Online Spaces
As a Trust & Safety professional, you are keen to ensure that your LGBTQ+ users or subscribers can enjoy their time on your platform without the risk of harassment. We, at Spectrum Labs, see it as imperative to ensure that all people, regardless of their orientation, are free to explore and express themselves in complete trust and safety in every Internet platform.
Trust and safety begins with detecting and removing harmful speech, and at Spectrum Labs it’s a top priority to stop it from proliferating. But moderating hate speech that targets this community is tricky to get right. As Trust & Safety professionals, we understand that certain words and phrases are discriminatory, but some of those words and expressions have been reclaimed by community members, who use them in empowering and connective ways.
Last month Spectrum Labs hosted the first-ever Safety Matters Summit, which included a panel discussion, “Building and Managing an Inclusive Platform for LGBTQ+.” During the panel, Savannah Badalich, Head of Policy at Discord, said, “I love the word, dyke, because I am a dyke.”
Balancing these two goals, stopping ill-intended slurs from non-community members while empowering LGBTQ+ people to express themselves freely, requires considerable expertise, testing and measurement, and vigilance.
Ways to Approach Content Moderation
We deploy multiple models to surface language that is derogatory or threatening to LGBTQ+ community members, including our hate speech, bullying, and insults models. We’ve found that deploying all three on an online platform is an effective strategy to elevate safety and simultaneously promote the freedom to connect. This helps ensure that the online world, often a rare refuge for some LGBTQ+ community members, remains as safe as possible.
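To make the idea concrete, here is a minimal sketch in Python of what running several behavior models side by side can look like. The model names, thresholds, and keyword-based scorers are hypothetical stand-ins for real trained classifiers, not Spectrum Labs’ actual system.

```python
# A minimal sketch of deploying several behavior models side by side and
# flagging a message when any one of them crosses its threshold. The scoring
# logic below is a placeholder; a production system would call trained models.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Flag:
    behavior: str   # which model fired
    score: float    # model confidence in [0, 1]

# Hypothetical stand-in scorers for hate speech / bullying / insult models.
def score_hate_speech(text: str) -> float:
    return 0.9 if "<slur>" in text.lower() else 0.1

def score_bullying(text: str) -> float:
    return 0.8 if "loser" in text.lower() else 0.05

def score_insult(text: str) -> float:
    return 0.7 if "idiot" in text.lower() else 0.05

MODELS: Dict[str, Callable[[str], float]] = {
    "hate_speech": score_hate_speech,
    "bullying": score_bullying,
    "insult": score_insult,
}

THRESHOLDS = {"hate_speech": 0.5, "bullying": 0.6, "insult": 0.6}

def moderate(text: str) -> List[Flag]:
    """Run every behavior model and return the flags that fire."""
    return [
        Flag(behavior, score)
        for behavior, scorer in MODELS.items()
        if (score := scorer(text)) >= THRESHOLDS[behavior]
    ]

if __name__ == "__main__":
    for flag in moderate("Nobody likes you, loser"):
        print(f"{flag.behavior}: {flag.score:.2f}")
```

Running all three models on every message, rather than a single catch-all classifier, lets each behavior carry its own threshold and its own enforcement policy.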
Community Guidelines
Users, especially if they’re young, don’t always realize when their language or behavior is offensive to certain communities. A well-developed set of Community Guidelines is essential to help them understand when their behavior crosses a line.
Likewise, an LGBTQ+ user may not realize that the toxicity directed at them isn’t tolerated on the platform, and that they have rights. Safety is promoted when Community Guidelines are readily available and reinforced by proactive actions on the part of the platform.
Role of People
While technology is an excellent way to augment or scale community moderation, it can never entirely replace the need for humans. Behavior and language evolve over time, and what’s offensive to one person may empower another. All platforms need people with considerable expertise in how language evolves who can update the detection models as required.
Contextual AI
Once the models are updated, contextual AI can assist in moderating content at scale. Contextual AI analyzes data within its context: it looks at content (the complete raw text) and, whenever possible, context (e.g., attributes of the users and the scenario, or the frequency of an offender’s past violations) in order to classify a behavior and take action if required.
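As an illustration, the sketch below shows one way the contextual idea can be expressed in code: a raw text score is adjusted by context signals before an action tier is chosen. The specific features, weights, and action names are illustrative assumptions, not Spectrum Labs’ production logic.

```python
# A minimal sketch of contextual classification: the same raw text score is
# weighed against context signals (scenario, offender history, user attributes)
# before an enforcement action is chosen. All values here are illustrative.
from dataclasses import dataclass

@dataclass
class Context:
    prior_violations: int     # frequency of offender: past actioned violations
    is_targeted_reply: bool   # scenario: message directed at a specific user
    account_age_days: int     # user attribute: brand-new accounts are riskier

def contextual_severity(text_score: float, ctx: Context) -> float:
    """Adjust the raw text score using context; clamp to [0, 1]."""
    severity = text_score
    severity += 0.1 * min(ctx.prior_violations, 3)  # repeat offenders escalate
    if ctx.is_targeted_reply:
        severity += 0.1                             # directed abuse is worse
    if ctx.account_age_days < 7:
        severity += 0.05                            # likely throwaway account
    return min(severity, 1.0)

def choose_action(severity: float) -> str:
    """Map a contextual severity score onto a tiered enforcement action."""
    if severity >= 0.9:
        return "remove_and_suspend"
    if severity >= 0.7:
        return "remove_content"
    if severity >= 0.5:
        return "queue_for_human_review"
    return "no_action"

if __name__ == "__main__":
    ctx = Context(prior_violations=2, is_targeted_reply=True, account_age_days=3)
    severity = contextual_severity(text_score=0.6, ctx=ctx)
    print(choose_action(severity))  # severity 0.95 -> "remove_and_suspend"
```

The same borderline text can warrant very different responses: a first-time remark in a public channel might only be queued for review, while the same words from a repeat offender targeting a specific user are removed outright.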
To be sure, creating an inclusive community requires investments in data, technology, and people, but they are investments that will deliver substantial dividends for your users, brands and investors.
Happy Pride!
For a detailed discussion, check out our white paper, LGBTQ+: Celebrating Diversity; Safeguarding Inclusivity.
To start protecting your community from hate speech, bullying, harassment, and insults, contact the team here.