As we enter a multi-sensory metaverse, audio is playing a bigger role in how we experience life online. With the rise of this audio renaissance, behaviors such as spamming and dog-piling are taking on new forms and negatively impacting our online experience. Audio is a burgeoning frontier for trust & safety as we consider the policies, enforcement, and technology needed to respond. In this masterclass, our panelists discuss some of the complex questions around audio moderation: data privacy, transcription vs. proactive detection, and what actions should be taken to keep a community safe.
This past week, our #TSCollective Community Manager, Matt Soeth, sat down with Dayo Akinrinade, CEO and founder of Wisdom; Barry Wright, Group Product Manager at Spotify; and Ryan Treichler, VP of Product Management at Spectrum Labs AI.
Dayo Akinrinade keeps safety at the forefront of her work at Wisdom. "Nothing is more important than the safety of the community. Even though we are new, we have already established 24x7 human moderation and work on issues of device blocking and bad actor detection. We look forward to continuing to build out sophisticated engines for abuse identification and prevention."
- With safety at the forefront, Wisdom hosts 1:1 conversations that enable person-to-person connection.
- Users need to feel safe in order to share ideas, connect, and express themselves
- Friction was built into the onboarding process: creating accounts, user names, verification, and so on.
- On Wisdom, as part of the verification process, creators link their profile to other social media profiles like Twitter and LinkedIn.
- Wisdom verifies top mentors
- 24/7 moderation to review content
- Code of conduct that all users, creators and speakers agree to before joining a conversation
Barry Wright is currently working as a Group Product Manager at Spotify, leading a team of product managers working across multiple aspects of Trust and Safety, including content moderation and legal policy management. Previously, he led the development of online video and television advertising optimization systems for major global agencies and broadcasters, and developed mathematics courseware solutions.
- Tech, like anything, is a tool for trust and safety. Through using a mix of tools: user reporting, automation and moderation, platforms can best make a difference in creating a safe community
- User education is an important tool as well. It lets a platform show what its tools are, how they work, and make users aware of anything they may not be expecting
- Tailor interventions to the policy/format of the content
- Take stock of what’s feasible now vs. what will be feasible in two years. The tech may exist, but that doesn’t mean it’s affordable; once it becomes affordable, you will see a high rate of adoption.
- Create a plan: 2 year plan vs 5 year plan, etc. Where do we want to go as a platform? What type of experience do we hope to create?
Ryan Treichler is VP of Product Management at Spectrum Labs. Voice features are becoming more prevalent on platforms with user generated content, but they can be difficult to moderate accurately and consistently because this is a nascent field with technological and policy complexities alike.
- As user generated content (UGC) increases, tech needs to rise to meet those challenges
- A lot of early audio moderation started with, and still runs on, keywords and voice-to-text transcription, which can be effective but is slow and inefficient.
- When it comes to audio, do we have access to listen to that stream (live or after the fact) and action it? Can we go back and listen to it? Is real time realistic? The technology is getting more affordable, offering solutions that provide sentiment analysis on complex language.
- In an ideal world, we want to have proactive intervention vs just user reporting (or both). This takes the pressure off the user to report a violation every time it happens. Good tech also frees up moderators to focus on high level issues.
- Updates in technology provide platforms with more data. More data, with better analytics, will help platforms identify a bad actor and take appropriate action.
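The keyword-plus-transcription approach described above can be sketched in a few lines. This is a minimal illustration only, with the transcription step mocked and all names (`transcribe`, `flag_keywords`, the blocklist contents) hypothetical; it also hints at why pure keyword matching is limited, since it sees words without context.

```python
import re

# Illustrative blocklist; real moderation policies are far more nuanced.
BLOCKLIST = {"spam", "scam"}

def transcribe(audio_clip: str) -> str:
    """Stand-in for a speech-to-text call; here the 'audio' is already text."""
    return audio_clip

def flag_keywords(transcript: str, blocklist: set[str]) -> list[str]:
    """Return any blocklisted words found in the transcript (case-insensitive)."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return sorted(set(words) & blocklist)

clip = "Check out this totally legit offer... definitely not a scam"
hits = flag_keywords(transcribe(clip), BLOCKLIST)
# hits == ["scam"]
```

Note that this pipeline flags the word "scam" here even though the speaker is denying it, which is one reason the panel points toward sentiment-aware analysis rather than keyword lists alone.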
New privacy standards, like GDPR and similar regulations, are shaping how platforms handle personally identifiable information (PII). This means companies need to put safeguards in place to protect user privacy as they navigate user generated content and audio moderation. Strong thought needs to be given to how to balance privacy with safety. This will affect how we label datasets when training models to detect audio that violates a platform's community guidelines. Once we have those audio files, how long can a platform retain them? There is some regulatory guidance in place for user data, but how it applies to audio still needs to be worked out. To that end, as we look at audio moderation and new spaces of engagement online, developing a cross-platform framework of audio moderation best practices would be a great start.
Looking for more?
Masterclass On-Demand | The Wild West of Audio Renaissance
Our masterclass is available to watch online. We recorded this session for you to revisit with your teams and absorb all the great information that was shared.
Whitepaper | The Increasing Use of Audio
Voice and audio chat communication is becoming more popular on user platforms, but there isn't much research and development behind moderating audio or voice. Download the whitepaper to learn best practices for starting audio moderation, the tools to use, and policies to be aware of.
In addition, you can check out our #TSCollective, a community of trust and safety professionals sharing best practices and support for this heroic work.