Bodyguard interview

Brands: How to Protect Your Brand Reputation from Online Toxicity on Your Social Networks?

Published: 15/02/2023

In a digital world where user-generated content is the new standard and online toxicity is on the rise, businesses have a vested interest in protecting their online communities. Every business makes this a top priority in its physical locations, so that customers can have a positive experience in a safe place where they can express themselves freely. But it is different in online spaces.

The lifespan of content on social networks is generally very short (around 18 minutes for a tweet), so without automatic moderation in place, toxicity can be devastating: the damage is done from the moment a comment is published.

Toxic content is gaining ground: not only does it pollute social interactions, it also affects communications on brand pages and channels such as customer forums, social networks, and online discussion tools.

For Bodyguard.ai, a contextual and autonomous moderation solution, online toxicity includes two distinct types of content:

  • Hateful comments: insults, threats, hate speech, racism, LGBT+ phobia, sexual and moral harassment, body-shaming and misogyny
  • Undesirable comments: content that degrades the quality of interactions, such as spam, scams*, advertisements, trolling, and links

    *Scam: closely related to spam, a scam is a ploy or trick to deceive someone and make an illegal profit, usually monetary

Are the moderation rules of social platforms doing enough? What are the impacts of online toxicity on your online community and your business? How can you fight online toxicity effectively and protect free speech?

Jean de Salins, Head of Brands and Media at Bodyguard.ai, an automatic moderation solution that blends human intelligence and advanced automated technology to detect and remove toxic comments and behaviour in real time, answered our questions.

Are the moderation rules of social platforms sufficient nowadays?

You only get one chance to make a first impression, and the same is true on the Internet. According to a Businesswire survey, 40% of users leave a platform after their first encounter with offensive words. They are also more likely to share their bad experience with other users, which can lead to serious and potentially irreparable damage to brand safety.

Another statistic shows that social platforms still have a lot of work to do on moderation: only 62.5% of hateful content is removed from social networks, according to the European Commission. In other words, a large volume of unmoderated content remains, affecting users as well as the e-reputation and brand image of companies.

The machine learning used by leading social platforms such as Meta or Instagram has an error rate that can easily reach 20-40%, whereas it is now technologically possible, thanks to the intelligent moderation of Bodyguard.ai, to reduce this rate to around 2-3%.

The type of moderation currently used represents a real obstacle to free speech: it is not accurate enough to detect every linguistic subtlety, and it can easily amount to censorship when the algorithms overreact.

What are the impacts on your community and your brand image?

E-reputation is strategic for brands and media: a strong reputation gives consumers confidence and encourages purchases. According to a 2020 Statista survey, people aged 25 to 34 make up a quarter of all Facebook users, and 73% of respondents said they use social media every day - illustrating the importance of this channel for brand content and engagement as well as social interactions. In the United Kingdom, however, social media platforms are most associated with unwelcome friend and follow requests (85%) and bullying or trolling (84%). Grabbing people's attention, or losing it, can happen in a matter of seconds. You might only get one chance to make a first impression.

E-reputation is much more than just a marketing issue. If user-generated content (UGC) is over-moderated, brands risk limiting freedom of speech and expression, and being accused of being overly judgmental or of failing to understand their audience.

If, on the other hand, brands have no moderation rules, their discussion spaces can become places where online hate is freely expressed. Without an effective moderation tool, there is a risk that harassment, insults, racism, sexism and the like will flourish. These comments may be aimed directly at the brand, but also at the people who subscribe to its content on social networks and at its employees.

As you will have understood, protecting your community is essential. If brands do not intervene, they run the risk of seeing their discussion spaces become lawless zones. If a crisis occurs, avoiding the repercussions will be difficult, if not impossible. Toxicity can affect the mental health of a brand's community managers and subscribers, and the brand will become associated with the toxic behaviours present in its community. There is a real risk of damage to its image.

What advice would you give to companies wishing to moderate their content without hindering freedom of expression?

Five concrete actions to take now against toxic content:

  • Set community guidelines: post a clear message at the entrance to your channels stating that aggressive, hateful or discriminatory language will not be tolerated on your pages.
  • Training and coaching: ensure your team is trained in techniques to cope, both personally and in a professional environment. Dealing with hateful content on a regular basis is psychologically draining, so make it clear that, as a business, you will support them.
  • Make use of tools: ensure you build a suite of tools to help prioritise, moderate and support your brand communications.
  • Speak up and work together: call out negative behaviours and share best practices and knowledge with your peers, even your competitors! We would love to support an industry standard for combating online toxicity.
  • Remember that the internet can be a mirror.

What brands need is a technology solution that strikes the right balance between machine and human, between algorithms and linguistics, and that analyses and understands the context of online discussions in real time, with a cultural and linguistic approach. Contextual and autonomous moderation is the only way to give brands and online communities a positive experience in a safe place where freedom of expression is protected.
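
To make the idea of layered, contextual moderation concrete, here is a minimal sketch in Python of what such a pipeline could look like. The rules, lexicon and names are illustrative assumptions for this article, not Bodyguard.ai's actual implementation, which combines far richer linguistic resources with machine learning.

# Minimal sketch of a layered moderation pipeline (illustrative only).
# The categories, rules and scoring below are hypothetical assumptions.
import re
from dataclasses import dataclass

@dataclass
class Verdict:
    action: str  # "keep" or "remove"
    reason: str

LINK_RE = re.compile(r"https?://\S+")
# Tiny illustrative lexicon; a real system relies on language- and
# culture-aware resources far beyond a word list.
HATEFUL_TERMS = {"idiot", "loser"}

def normalize(text: str) -> str:
    """Lowercase and undo simple character substitutions (e.g. 'id1ot')."""
    subs = str.maketrans({"1": "i", "3": "e", "0": "o", "@": "a"})
    return text.lower().translate(subs)

def moderate(comment: str, author_is_new: bool = False) -> Verdict:
    text = normalize(comment)
    # Undesirable content: links from unknown accounts are often spam or scams.
    if LINK_RE.search(text) and author_is_new:
        return Verdict("remove", "suspected spam or scam link")
    # Hateful content: a naive lexicon lookup stands in here for a contextual
    # classifier that weighs who is targeted, how, and in which conversation.
    if any(term in text.split() for term in HATEFUL_TERMS):
        return Verdict("remove", "insult detected")
    # Default: keep the comment, protecting freedom of expression.
    return Verdict("keep", "no toxicity detected")

print(moderate("Great product, love it!"))                            # kept
print(moderate("You absolute id1ot"))                                 # removed
print(moderate("Win big: http://scam.example", author_is_new=True))   # removed

The design choice mirrored in this sketch is the layering: cheap rule-based checks for undesirable content come first, a contextual judgement handles hateful content, and the default is to keep the comment, so that free expression is the norm rather than the exception.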

At Bodyguard.ai, we would love to see a commitment to online safety become a true industry standard or quality indicator for companies and brands. Online toxicity can now be detected and eliminated to protect communities and brands before they suffer irreversible damage.

Jean de Salins is Head of Sales for the media and brand sector at Bodyguard. He has worked in the digital world of media and brands for the past 10 years, including a long stint at Google helping large French companies deploy their online growth strategies. Having experienced the ins and outs of web 2.0, his priority is to provide unstoppable protection to individuals, communities and audiences in the face of emerging social networks and the explosion of online hate.

Jean de Salins, Head of Sales for the media and brand sector @ Bodyguard

Guest article. Appvizer occasionally invites experts. They are independent from Appvizer’s Editorial Committee and share their personal opinion and experience. The content of this article is written by the guest author and does not reflect the views of Appvizer.
