As we enter a multi-sensory metaverse, audio is playing a bigger role in how we experience the online world. With the rise of this audio renaissance, we are seeing behaviors such as spamming and dog-piling take on new forms and negatively impact our online experiences. Audio is a burgeoning frontier for trust & safety as we consider the policies, enforcement, and technology needed to respond to this movement. In this masterclass, our panelists discuss some of the complex issues around audio moderation, like data privacy, transcription versus proactive detection, and what actions should be taken to keep a community safe.
This past week, our #TSCollective Community Manager, Matt Soeth, sat down with Dayo Akinrinade, CEO and founder of Wisdom; Barry Wright, Group Product Manager at Spotify; and Ryan Treichler, VP of Product Management at Spectrum Labs AI.
Dayo Akinrinade has kept safety at the forefront of her work at Wisdom. "Nothing is more important than the safety of the community. Even though we are new, we have already established 24/7 human moderation and work on issues of device blocking and bad actor detection. We look forward to continuing to build out sophisticated engines for abuse identification and prevention."
Barry Wright is currently working as a Group Product Manager at Spotify, leading a team of product managers working across multiple aspects of Trust and Safety, including content moderation and legal policy management. Previously, he led the development of online video and television advertising optimization systems for major global agencies and broadcasters, and developed mathematics courseware solutions.
Ryan Treichler is VP of Product Management at Spectrum Labs. Voice features are becoming more prevalent on platforms with user-generated content, but they can be difficult to moderate accurately and consistently. This is because audio moderation is a nascent field with technological and policy complexities alike.
New privacy standards, like GDPR and similar regulations, inform how platforms handle personally identifiable information (PII). This means companies need to put safeguards in place to protect user privacy as they navigate user-generated content and audio moderation. Careful thought needs to be given to how to balance privacy with safety. This balance will shape how data sets are labeled when training models to detect audio that violates a platform's community guidelines. And once a platform has those audio files, how long can it retain them? There is some regulatory guidance in place for user data, but how it applies to audio still needs to be worked out. To that end, as we look at audio moderation and new spaces of engagement online, developing an audio moderation framework with best practices across platforms would be a great start.
Looking for more?
Our masterclass is available to watch online. We recorded this session for you to revisit with your teams and absorb all the great information that was shared.
Voice and audio chat are becoming more popular on platforms with user-generated content, but there isn't much research and development behind moderating audio or voice. Download the whitepaper to learn best practices for starting audio moderation, the tools to use, and the policies to be aware of.
In addition, you can check out our #TSCollective, a community of trust and safety professionals sharing best practices and support for this heroic work.