Prevent Disruption

Empower your product to prevent disruptive behavior proactively, at the first touchpoint, before a user ever enters your community. Increase user engagement and healthy interactions on your platform.

Stand-Alone Solution

Deploy the only stand-alone solution of its kind in your communities, without unnecessary cost. Reduce the revenue loss associated with toxic users while protecting the mental health of moderators and community managers.

Enterprise Ready

Enterprise-level integration, 24/7 support, and a service-level agreement. Regulatory compliance with GDPR, COPPA, and DCMS guidance. Protect your brand, your product, and your users' data from the associated legal ramifications.

Platform Support

Integrate with your product seamlessly using our well-documented and easily customizable API. Understand precisely why Samurai is making a decision through its nuanced categories and detailed output.


Protect Your Communities

Free Developer API
Enterprise-Level Protection

Our Unique Approach

Samurai takes a neuro-symbolic approach to detecting hidden meanings in usernames, identifying individual words within the username and generating as many variants as possible. We use these techniques to recover the original attempt at creating a disruptive username. This method allows us to categorize the different types of subversive language in a user's online identity.
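The variant-generation step can be sketched roughly as follows. This is a minimal illustration, not Samurai's actual method: the substitution table and flagged-word list are assumptions made for the demo.

```python
# Minimal sketch of the decomposition idea: map leetspeak characters to
# candidate letters, expand the variants, and look for hidden words.
# The substitution table and word list are illustrative assumptions,
# not Samurai's actual rules.
from itertools import product

LEET = {"0": "o", "1": "il", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s"}

FLAGGED = {"death", "furries"}  # toy word list for the demo

def variants(username: str) -> set[str]:
    """Expand every leet character into each plausible letter."""
    choices = [LEET.get(ch, ch) for ch in username.lower()]
    return {"".join(combo) for combo in product(*choices)}

def flagged_words(username: str) -> set[str]:
    """Return flagged words hidden inside any normalized variant."""
    return {w for v in variants(username) for w in FLAGGED if w in v}

# A naive keyword filter misses the obfuscation entirely:
print("death" in "D34thToFurr1es".lower())        # False
# Variant expansion recovers the hidden words:
print(sorted(flagged_words("D34thToFurr1es")))    # ['death', 'furries']
```

Real usernames also use word-boundary tricks and spoonerisms (as in the examples below), which brute-force substitution alone cannot decode; the sketch only illustrates the simplest layer.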

Chart: % increase in disruptive behavior based on username

Detailed Output

Decomposition of the username: DigBildo → Big D*ldo
Categories: Offensive, Sexual
Toxic element: D*ldo
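A result like the one above could be consumed as a structured record. Here is a minimal sketch assuming illustrative class and field names; this is not Samurai's documented schema.

```python
# Illustrative shape of a username-detection result as a structured
# record. Class and field names are assumptions for the sketch, not
# Samurai's documented schema.
from dataclasses import dataclass, field

@dataclass
class UsernameVerdict:
    username: str
    decomposition: str                               # reconstructed hidden phrase
    categories: list[str] = field(default_factory=list)
    toxic_elements: list[str] = field(default_factory=list)

    @property
    def disruptive(self) -> bool:
        # Any assigned category marks the username as disruptive.
        return bool(self.categories)

verdict = UsernameVerdict(
    username="DigBildo",
    decomposition="Big D*ldo",
    categories=["Offensive", "Sexual"],
    toxic_elements=["D*ldo"],
)
print(verdict.disruptive)  # True
```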

Categories

Inappropriate: P41nbowRub3s → Rainbow Pubes
Offensive: D34thToFurr1es → Death to furries
Profanity: FolyHuck → Holy F*ck
Sexual: Aneed Morehead → I need more head

Simple Ideas Require Complex Solutions

Filter- and keyword-based methods are inefficient and inaccurate. Leetspeak, numeric, and special-character workarounds require a nuanced understanding of composition; Samurai protects your communities from these adversarial attacks and more "creative" approaches.


The Next Step in Trust and Safety

The Samurai Cyber Guardian

The first intelligence built for autonomous cyberviolence intervention, preventing the most disruptive and harmful behavior and communication. Samurai can read and respond to cyberviolence, understanding the context, reasoning, and intent behind nuanced forms of language. Samurai's Cyber Guardian decreased disruptive behavior by 45% in one of the most infamously toxic communities. Although this community was moderated, Samurai accomplished this feat without the use of moderators, and a staggering 76% of cyberbullies who interacted with Samurai voluntarily changed their tone.

The First Truly Proactive Response

98% Precision Detection of Personal Attacks

Samurai provides the transparency and control necessary for AI to be trusted to act autonomously. It detects personal attacks with a false-positive rate of just 2%.

Why Neuro-Symbolic AI?

Neuro-symbolic AI requires a fraction of the data that deep learning does, and it is transparent and unbiased, so you can track and understand precisely why it is making a decision. As it detects cyberviolence, it can take action, intervene autonomously, or even notify moderators before the damage is done.

Profanity and Personal Attacks are not Mutually Exclusive

Samurai understands nuanced forms of communication, from context to intent. Our language models manage the distinction between profanity and personal attacks and understand the difference between language targeted at a first party and at a third party. Standard models for precision detection cover profanity, sexual remarks, rejection, personal attacks, threats, and sexual harassment.

Blackmail Detection Example:

Message: "Send me more nudes or I'll publish the ones I already have"

Samurai Cyber Guardian detection:
Blackmail: threat to reveal information
Sexual harassment: attempt to solicit intimate photographs

Username: Girls like you like it rough

Samurai Cyber Guardian: Hi friend, it appears you are violating our community standards, as your message is highly sexual and could be considered offensive or threatening to others. Would you like to revise your message, cancel it, or send it anyway?

Username: [Message deleted]

Samurai Cyber Guardian: Thank you for helping us maintain a healthy community.

76% of cyberbullies voluntarily change their tone after being educated and redirected by Samurai's Cyber Guardian

Resistance to Adversarial Attacks

Symbols, numbers, word boundaries, special characters, abbreviations, and combinations of them are routinely caught by Samurai. When adversarial techniques change, our AI adapts accordingly.

Every Community has Different Needs

Every community has different needs, and Samurai is completely customizable to suit them. Set your own parameters for handling profanity, violent-language detection, and interventions. We can even set up new language models specific to your community.
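Per-community customization like this might be expressed as a configuration object. A minimal sketch, assuming illustrative parameter names and values; these are not Samurai's actual settings:

```python
# Illustrative per-community configuration. All parameter names and
# values are assumptions for the sketch, not Samurai's actual settings.
community_config = {
    "profanity": {
        "enabled": True,
        "action": "prompt_revision",    # e.g. block, prompt_revision, flag_only
    },
    "violent_language": {
        "enabled": True,
        "action": "block",
        "notify_moderators": True,
    },
    "custom_models": ["gaming-slang-v1"],  # hypothetical community-specific model
}

# A gaming community might relax profanity handling while keeping
# violence detection strict:
community_config["profanity"]["action"] = "flag_only"
print(community_config["profanity"]["action"])  # flag_only
```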

Educate and Redirect

Samurai's precision has enabled our AI to act autonomously in various ways. Samurai can block messages from being sent, or issue kind, normative messaging that educates the user on why their message might be considered offensive while prompting them to revise it. Samurai can also proactively and autonomously intervene to cool down a user.
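The educate-and-redirect flow can be sketched as a simple decision function. The category names, actions, and wording below are assumptions for illustration, not Samurai's actual policy:

```python
# Sketch of an educate-and-redirect decision: given the categories a
# detector assigned to a draft message, decide what happens before the
# message is sent. Categories and wording are illustrative assumptions.

SEVERE = {"threat", "sexual_harassment", "blackmail"}

def intervene(categories: set[str]) -> tuple[str, str]:
    """Return (action, user_facing_message) for a draft message."""
    if categories & SEVERE:
        # Block outright for the most harmful content.
        return ("block",
                "This message violates our community standards and was not sent.")
    if categories:
        # Educate and prompt the user to revise before sending.
        return ("prompt_revision",
                "Your message may be considered offensive. Revise, cancel, or send anyway?")
    return ("allow", "")

print(intervene({"profanity"})[0])  # prompt_revision
print(intervene({"threat"})[0])     # block
print(intervene(set())[0])          # allow
```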

Identify the Bad Actors

Catch negative behavior in the moment; intervene, de-escalate, and encourage or reward positive redirection. Identify the bad actors and place them on a path to redemption. Recognize the users who intervene on their own and set an example for the community, exhibiting the behavior of a community ambassador, like a true samurai.

Empower Moderators

Samurai can turn moderators into superheroes: with 98% precision, Samurai reduces moderator load by over 90%. Moderators can proactively prioritize imminent crises before the damage is done.

Community, Brand, and Customer Insight

See exactly how Samurai is not only impacting but improving your communities and your business, with real-time reporting on changes in toxicity, churn, and retention.


Samurai

About Us

There was a time when most people believed the social internet was full of nothing but possibility. We have long taken the stand that a strong vision, ethics, and approach to technology are necessary to protect children and online communities in a world that has the potential to become very dangerous.

We are a committed collective intelligence of experts across domains creating a new type of research organization that builds AI products with the highest integrity.

We combine the rigor of deep, peer-reviewed scientific and academic research with the speed and agility of a decentralized private technology company.

Applied Research and Development

Private and public sector organizations work with us to develop foundational research and apply it to high-performance violence-detection and intervention intelligence and tools. Every community and environment is unique, and the precision, flexibility, and transparency of our language engine enable us to solve for a diversity of contexts and use cases.

NEW Paper on how personal attacks decrease user activity in social networking platforms. Read more

NEW See evidence from our work on Reddit proving we can reduce violence in online communities. Read more

NEW Read our paper on improving classifier training efficiency for automatic cyberbullying detection. Read more

Learn about our work with Hate Lab and Cardiff University. Read more

See how we are helping INACH in the hidden corners of the dark web. Read more

Read our published research tackling the problem of sarcasm detection with the use of machine learning and knowledge engineering techniques. Read more