Behind the scenes: what is contextuality in moderation?

Do you get the impression that online hatred isn't decreasing, despite advances in technology? After a terrorist attack, why do we still see inappropriate reactions like laughing emojis or celebratory comments? This isn't just a technological failure; it's a failure to understand context.

The challenge of contextual understanding

In the digital world, interpreting the context behind user interactions is crucial, yet complex. The same emoji or phrase can have multiple meanings depending on where and how it's used. For example, a champagne emoji posted under a celebratory post is fitting, but the same emoji under news of a tragic event is insensitive and offensive.

At Bodyguard, we tackle this challenge head-on with our internal AI engine, built on proprietary algorithms developed with our in-house linguists and now enhanced by the power of Large Language Models (LLMs). Our approach to contextuality in moderation involves several key components, illustrated in a simplified sketch after the list:

  1. Environmental context: Our system analyzes the type of content a comment is posted on. It understands that content can be neutral, positive or inappropriate, but it goes deeper than that. We assess the topics discussed, the reputation of entities involved, and the sensitivity of the subject matter—such as whether it is about individuals who have been harmed or killed.

  2. Recipient context: We assess who the comment is directed at. For instance, a seemingly harmless comment might be problematic if it's directed at a vulnerable individual or group.

  3. Temporal context: Recognizing the timing of comments is crucial. A comment might be harmless in one instance but become problematic when posted during a sensitive event, a brand crisis, and so on.

  4. Cultural and linguistic nuances: By leveraging expert linguists and cultural analysts, our AI understands the subtleties of different languages and cultures, ensuring moderation is both accurate and sensitive.

  5. Customer context: We consider who receives the comment, focusing on our customer's industry, whether they are facing a crisis, and other specific contexts that may affect how content should be moderated.
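
To make these five signals concrete, here is a deliberately simplified, hypothetical sketch of how they might feed a single moderation decision. The class, field names, labels, and rules are all invented for illustration; they are not Bodyguard's actual engine, which weighs far more signals than a handful of if-statements.

```python
# Hypothetical sketch: combining the five context signals above into one
# moderation decision. Names and rules are illustrative only.
from dataclasses import dataclass

@dataclass
class ModerationContext:
    environment: str  # what the comment is posted on, e.g. "tragedy", "celebration"
    recipient: str    # who it targets, e.g. "vulnerable_group", "general"
    temporal: str     # timing, e.g. "sensitive_event", "normal"
    language: str     # language/culture code, e.g. "fr", "en"
    customer: str     # the customer's situation, e.g. "brand_in_crisis", "sports_club"

def moderate(comment: str, ctx: ModerationContext) -> str:
    """Toy rules: the same comment can be fine or toxic depending on context."""
    celebratory = "🍾" in comment or "😂" in comment
    # The champagne-emoji case from earlier: fitting under a celebration,
    # offensive under news of a tragic event.
    if celebratory and ctx.environment == "tragedy":
        return "REMOVE: celebratory reaction to a tragic event"
    if ctx.temporal == "sensitive_event" and ctx.recipient == "vulnerable_group":
        return "ESCALATE: review against current-event sensitivities"
    return "KEEP"

# Identical emoji, opposite outcomes:
print(moderate("🍾", ModerationContext("celebration", "general", "normal", "en", "sports_club")))
print(moderate("🍾", ModerationContext("tragedy", "general", "sensitive_event", "en", "news_outlet")))
```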

The role of Bodyguard’s handmade technology

Our handmade technology is designed to replicate the human thought process in content review, ensuring nuanced and empathetic decision-making. Our algorithms, developed jointly by linguists and developers, are continually updated to adapt to new forms of online toxicity and to cultural shifts.

Leveraging Large Language Models (LLMs)

With the integration of LLMs, Bodyguard's system goes beyond simple keyword detection. LLMs give our AI the ability to comprehend complex sentences, detect subtle nuances, and understand the broader context of discussions. This makes our moderation more precise and adaptable to the ever-evolving online landscape.
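
Bodyguard hasn't published the details of its LLM pipeline, so the following is only a minimal sketch of the general pattern: handing the model the comment together with its context and asking for a moderation label. It uses the OpenAI Python SDK purely as a stand-in; the model name, prompt wording, and label set are all assumptions.

```python
# Minimal sketch of context-aware classification with an LLM. The OpenAI
# SDK, model name, prompt and labels are stand-ins, not Bodyguard's pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_with_context(comment: str, post_summary: str, events: str) -> str:
    """Hand the model the comment together with its context, not in isolation."""
    prompt = (
        f"Post: {post_summary}\n"
        f"Current events: {events}\n"
        f"Comment: {comment}\n\n"
        "Given this context, label the comment KEEP, REMOVE or ESCALATE. "
        "Answer with the label only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You are a content-moderation assistant."},
            {"role": "user", "content": prompt},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# The same laughing emojis read very differently under a joke and under attack coverage:
print(classify_with_context("😂😂😂", "News report on a terrorist attack", "ongoing attack coverage"))
```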

Real-life scenarios: contextual moderation in action

Social media: detecting scams and deceptive intent

Consider a scam post targeting the luxury industry: disguised as a giveaway offering high-end watches, it features text like "Enter to win a luxury watch" followed by a winking emoji. Bodyguard's AI recognizes the deceptive intent by analyzing the context and patterns of such posts, ensuring these scams are promptly flagged and removed.

Our AI system can also analyze user images to detect visual cues associated with scams, enhancing our ability to identify and prevent scam attempts, even when the text alone seems benign.
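
As a toy illustration of how such converging signals might be combined, the sketch below pairs the giveaway wording and winking emoji from the example with a stubbed visual score. The keyword list, threshold, and image scorer are invented for this sketch and stand in for far richer models.

```python
# Toy heuristic for the giveaway-scam pattern above: textual signals plus a
# stubbed visual score. Keywords, threshold and scorer are invented.
import re

GIVEAWAY_PATTERNS = [r"\benter to win\b", r"\bgiveaway\b", r"\bfree\b.*\bwatch\b"]

def image_scam_score(image_url: str) -> float:
    """Stub for a vision model scoring scam cues (fake logos, stolen product shots)."""
    return 0.8  # placeholder value

def looks_like_scam(text: str, image_url: str | None = None) -> bool:
    text_hit = any(re.search(p, text.lower()) for p in GIVEAWAY_PATTERNS)
    emoji_hit = "😉" in text  # the winking emoji from the example
    visual_hit = image_url is not None and image_scam_score(image_url) > 0.7
    # Require converging signals: benign-looking text alone is not enough.
    return (text_hit and emoji_hit) or (text_hit and visual_hit)

print(looks_like_scam("Enter to win a luxury watch 😉"))  # True
```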

World events and contextual understanding

Understanding the context of a post also means being aware of current global events. For instance, during heightened tensions in the Middle East, a comment like "We are tired of them" could be aimed at specific ethnic or religious groups and needs a sensitive, informed approach to moderation. Bodyguard's AI is trained to stay up-to-date with global events, ensuring that moderation decisions are informed by the current socio-political climate.

Contextual sensitivity in cultural comments

As an example of contextual sensitivity in cultural comments, consider a post featuring a black model as the face of a new campaign, and a comment like "I don’t really like the color of the model." Bodyguard’s AI assesses the broader conversation and cultural sensitivities to determine whether such comments are appropriate, and makes sure that harmful biases are addressed.

A fashion brand’s cultural misstep

Imagine a high-end fashion brand launching a new clothing line inspired by traditional cultural dress. The campaign is beautifully shot and marketed globally, and comments start flooding the brand’s social media platforms. Among the sea of reactions is a comment in French: "C'est du vol pur et simple," which translates to "This is pure theft."

At first glance, the comment might seem like a critique of the pricing or the design. But in this context, "vol" (theft) is a powerful accusation of cultural appropriation, implying that the brand is exploiting a culture without proper respect or acknowledgment.

Traditional moderation tools might miss the deeper contextual significance, but Bodyguard’s AI is trained to understand these nuances and quickly flags the comment. This immediate response allows any brand or company to address the issue proactively.

The future of contextual moderation

The journey to effective online moderation is ever-evolving, and implementing contextuality in moderation doesn't just mitigate harm—it fosters a safer, more respectful online community. Users can engage authentically without fear of harassment, and brands can uphold their reputation and regulatory compliance.

At Bodyguard, our commitment to contextual understanding, powered by cutting-edge AI and LLMs, ensures we lead the charge in creating safer, more inclusive online communities. We understand context because it's not just about what is said, but how, where, and why it's said.

We’ve got your back!
