Jonathan Freger is cofounder and CTO of WebPurify, a leading content moderation service.
We live in an era where user loyalty and trust are primary drivers of a business's success, but that trust can quickly be eroded for marginalized groups, including members of LGBTQIA+ communities, who experience high levels of bullying, hate and censorship online.
The LGBTQIA+ community is the fastest-growing "minority" segment, with spending power of $1.4 trillion annually as of 2021, making it a key demographic to retain and attract. But inclusivity doesn't come from waving a magic wand; building an online community that welcomes all groups takes flexibility and constant awareness of cultural changes and sensitivities across diverse groups.
Many brands may not even recognize how many of their online spaces require this kind of awareness and conscious design for inclusivity. Spaces vulnerable to harmful or non-inclusive content include everything from product review comments on retail websites to user profiles in B2C and B2B software to the content and social media presence of any brand.
When it comes to fostering inclusivity in these spaces, a few key rules can help brands better protect their online communities from harmful content and design with everyone in mind.
Creating Safety Online Through Product Design
Understanding the many facets of identity is the first step to designing an inclusive online space for all users. Businesses need to avoid dichotomous thinking when considering gender identities and how they are represented in platform and user account page design.
There are many gender identities, including transgender, gender-neutral and non-binary, that shape the way users choose to self-identify. Certain elements of a channel or profile require the teams that design them to understand how identity can affect user experience and safety. Social platforms, including dating apps, often have fields that require pronouns, sexual orientation or gender to be publicly displayed on the user's account.
To make everyone feel welcome, identity fields should be left optional. Offering only a limited set of choices, or an unmoderated open text box that invites hateful or inappropriate entries, can cause unnecessary strife for marginalized groups.
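As a rough illustration of what an optional, extensible identity field might look like in practice, the sketch below models pronouns as either a curated choice or a self-described entry that is moderated before it ever appears publicly. The type names, field names and moderation check are all hypothetical assumptions, not a reference to any particular platform's design.

```typescript
// Hypothetical profile schema: every identity field is optional,
// choices are extensible, and free-text entries are moderated
// before they are displayed publicly.

type Pronouns =
  | { kind: "choice"; value: "she/her" | "he/him" | "they/them" }
  | { kind: "custom"; value: string }; // free text, moderated below

interface UserProfile {
  displayName: string;
  pronouns?: Pronouns;           // optional: no one is forced to disclose
  genderIdentity?: string;       // optional, self-described
  showPronounsPublicly: boolean; // visibility stays under the user's control
}

// Placeholder moderation check; a real system would call a moderation
// service or queue the text for human review.
async function isSafeForDisplay(text: string): Promise<boolean> {
  const blocked = ["<slur>", "<hate-term>"]; // stand-ins, not a real list
  return !blocked.some((term) => text.toLowerCase().includes(term));
}

async function publicPronouns(profile: UserProfile): Promise<string | undefined> {
  const p = profile.pronouns;
  if (!p || !profile.showPronounsPublicly) return undefined;
  if (p.kind === "choice") return p.value;
  // Custom text only appears after passing moderation.
  return (await isSafeForDisplay(p.value)) ? p.value : undefined;
}
```

The key design choices here are that every identity field can be omitted entirely and that the user, not the platform, decides what is visible.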
Privacy is another major pillar of safety by design. Platforms should prioritize strong privacy controls, data encryption and transparent data-handling practices to safeguard user information. LGBTQIA+ users may face significant risks if their personal information is exposed, particularly LGBTQIA+ youth, who are at greater risk of unwanted attention when their identity is publicly visible.
To help create a safe space, businesses can implement user reporting mechanisms and solicit feedback from LGBTQIA+ communities as products are developed or changes are made to the user experience.
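A minimal sketch of such a reporting mechanism might look like the following, assuming an in-memory queue as a stand-in for whatever persistence and reviewer-notification layer a real platform would use; all names here are illustrative.

```typescript
// Hypothetical user-report flow: capture the report, record it and
// route the item to a human review queue.

interface ContentReport {
  contentId: string;
  reporterId: string;
  reason: "hate_speech" | "harassment" | "impersonation" | "other";
  details?: string; // optional free text from the reporter
  createdAt: Date;
}

const reviewQueue: ContentReport[] = []; // stand-in for a real queue/database

function submitReport(report: Omit<ContentReport, "createdAt">): ContentReport {
  const record: ContentReport = { ...report, createdAt: new Date() };
  reviewQueue.push(record); // a production system would persist and notify reviewers
  return record;
}
```

Keeping the reason codes explicit, rather than burying everything in free text, also makes it easier to spot patterns of targeted harassment against specific communities.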
Creating Equitable Rules Throughout Platforms
As businesses seek to foster a secure online environment, it is also essential that they design policies with inclusivity in mind rather than a binary view of gender.
For example, the Free the Nipple movement was a catalyst in the way businesses looked at how nudity could be displayed online and encouraged them to consider the ways in which these policies unfairly targeted marginalized communities, including transgender users.
The constant evolution of gender identities, and the complexity of enforcing policies at scale, requires businesses and CTOs to work in tandem with diversity, equity and inclusion departments—as well as users of their online communities—to solicit critical feedback about how to create and enforce policies that don’t alienate members of the LGBTQIA+ community or others.
AI Vs. Humans—Both Are Needed For Mitigating Bias
Automation through artificial intelligence (AI) plays a critical role in moderating content on platforms that deal with a high volume of user-generated content, such as large social media platforms. However, skeptics question whether AI removes bias or perpetuates it. Algorithms built and trained on biased data inevitably have bias embedded within them, so we cannot leave it to AI alone to make unilateral decisions on what’s inclusive.
While AI can detect hate speech at scale by scanning vast amounts of user-generated content and filtering out egregious violations, humans must continue to serve as a second layer of review that adds contextual understanding.
A good example is words or phrases used as identity terms that can also be deployed in negative, hateful ways. The word "gay" used as a descriptor of someone's identity should not be taken down, but derogatory phrases using the word must be removed.
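To make this division of labor concrete, here is a minimal sketch of a confidence-threshold router, under the assumption that some upstream AI classifier produces a hate-likelihood score between 0 and 1. The thresholds and type names are illustrative; a real system would tune these values against its own data.

```typescript
// Hypothetical hybrid moderation router: clear-cut cases are handled
// automatically, ambiguous ones are escalated to human reviewers.

interface ModerationScore {
  hateLikelihood: number; // 0..1, assumed output of an upstream AI classifier
}

type Decision = "approve" | "remove" | "human_review";

function routeContent(score: ModerationScore): Decision {
  if (score.hateLikelihood >= 0.95) return "remove";  // egregious violation, auto-remove
  if (score.hateLikelihood <= 0.05) return "approve"; // clearly benign, auto-approve
  // The middle band is exactly where context matters: the same word can
  // be a self-descriptor or a slur, so a human makes the call.
  return "human_review";
}
```

The mid-range band is where cases like "gay" as an identity term versus a slur tend to land, which is precisely where human contextual review earns its keep.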
Companies, too, need to be wary of over-moderation, which can be perceived as censorship of groups looking to express themselves through their identity. In 2018, members of the LGBTQIA+ community claimed they were being censored on Instagram, leaving these users feeling silenced and unwelcome on the platform.
All in all, customers are a business's greatest asset. With nearly 6 in 10 customers stating they won't return after a bad customer service experience, brands of all kinds must design with inclusivity in mind, recognize bias across their online platforms and spaces, and remove abuse quickly to foster diverse communities.