This year will be an eventful one for Trust & Safety teams worldwide. Governments are losing patience with platforms that don't adequately protect their users from bullying, abuse, and harmful content.
In two instances, Australia and the UK, governments are using regulation to demand that Internet platform providers actively police the content their users share and remove harmful material that appears on their platforms.
Australia Online Safety Bill 2021
On June 23, 2021, the Australian Parliament passed the Online Safety Bill 2021, which seeks to tame cyber abuse, cyberbullying, and unwanted sharing of intimate content and images. It also gives the eSafety Commissioner more powers to compel platforms to remove toxic content, and provide the details of users who post offensive content when asked.
The Australian government warns platforms that operate within its borders that the Act "will have significant implications for online service providers because it makes them more accountable for the online safety of their users."
What's more, the eSafety Commissioner can now require platforms to block access to violent content and images, as well as intimate content, such as revenge porn.
The Act is a direct outcome of a High Court ruling that Internet platforms can be held liable for defamatory and violent content that appears on their sites. Think of it as the anti-Section 230. As a result of the ruling, CNN blocked Australians from accessing its Facebook page.
(The Facebook pages of news sites tend to be rife with toxic content, with users regularly insulting one another. Stories about COVID-19 generate a lot of othering, along with suggestions that people who fall ill should be denied medical care. CNN would better serve its followers by allocating resources to moderating its pages rather than implementing global blocks.)
UK Online Safety Bill
Meanwhile, the UK is working to strengthen its Online Safety Bill. The law will be revised to accomplish three goals:
Prevent the spread of illegal content and activity such as images of child abuse, terrorist material and hate crimes, including racist abuse
Protect children from harmful material
Protect adults from legal but harmful content
According to the UK Government, “extra priority illegal offences to be written on the face of the bill include revenge porn, hate crime, fraud, the sale of illegal drugs or weapons, the promotion or facilitation of suicide, people smuggling and sexual exploitation. Terrorism and child sexual abuse are already included.”
The child advocacy group NSPCC wants to see the bill strengthened in multiple ways, including assigning an "overarching general safety duty" within platforms. The group wants the government to mandate that all platforms appoint a Chief Trust & Safety Officer who "sits above" all of the safety duties spelled out by the law.
Spectrum advocates for a C-suite-level Trust & Safety officer at every platform, and would be pleased to see this change made to the law, ensuring that trust and safety principles are embedded in every aspect of a platform's decision-making process.
Privacy Regulations: GDPR, CCPA, and CPRA
GDPR, CCPA, and CPRA will also have implications for the ways in which Spectrum works with its clients and the user data we touch.
Most notably, the CPRA creates new categories of "sensitive data" and gives users more control over it. Some of these categories affect specific types of platforms -- e.g., dating apps that cater to LGBTQ+ communities.
The new protections cover the following data:
Geolocation — a consumer’s precise geolocation.
Race, religion and union membership — a consumer’s racial or ethnic origin, religious or philosophical beliefs, or union membership.
Communications — the contents of a consumer’s private communications, unless the company is the intended recipient of the communication.
Genetics — a consumer’s genetic data.
Biometrics — the processing of biometric information for the purpose of uniquely identifying a consumer.
Health — personal information collected and analyzed concerning a consumer’s health.
Sexual orientation — personal information collected and analyzed concerning a consumer’s sex life or sexual orientation.
Spectrum Labs will continue to monitor the developments that affect Trust & Safety teams, and share our insights regarding their impact. If you have any questions about how we can help you with content moderation, contact our team!