Frequently Asked Questions

You've got questions. We've got answers.

General Questions

What is trolling and toxicity?

Both terms are used often when discussing social products, so it’s important that we clearly define them. A troll is a user who makes a deliberately offensive or provocative online post. Toxicity refers to high-risk behaviors like extreme bullying, harassment, abusive comments, hate speech, and threats.

What is Community Sift?

Community Sift is a high-risk content detection system and moderation tool for social products.

Using a unique blend of artificial intelligence coupled with behind-the-scenes human review, Community Sift identifies and handles threatening UGC like bullying, harassment, and hate speech in real time.

Community Sift takes a 360-degree approach to classification, factoring in topic, riskiness, context, user reputation, and more to make decisions. Our reputation system considers individual user behavior over time and applies a stricter or more permissive filter based on their trust level. This contextual approach to labeling allows you to tailor the system to your community, and to block and allow content based on your tolerance level.
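
To make the contextual approach concrete, here is a minimal sketch (with invented names and thresholds, not Community Sift's actual API) of how risk and trust level might combine into a single decision, so that the same message can pass for a trusted user but be filtered for one with a history of rule-breaking:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Message:
    text: str
    risk_score: float  # 0.0 (benign) to 1.0 (severe), from the classifier
    topics: List[str]  # e.g. ["bullying"], from the classifier
    user_trust: str    # "trusted", "default", or "not_trusted"

# Hypothetical thresholds: less-trusted users face a stricter filter.
RISK_THRESHOLDS = {"trusted": 0.8, "default": 0.6, "not_trusted": 0.3}

def decide(msg: Message) -> str:
    """Return 'allow' or 'filter' based on risk and the sender's trust level."""
    if msg.risk_score >= RISK_THRESHOLDS[msg.user_trust]:
        return "filter"
    return "allow"

print(decide(Message("you are a noob", 0.5, ["mild_insult"], "not_trusted")))  # filter
print(decide(Message("you are a noob", 0.5, ["mild_insult"], "trusted")))      # allow
```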

Community Sift also comes with a complete set of moderation tools, including customizable content queues, automated user actions like mute, warn, suspend, and ban, and a live chat viewer for on-the-spot moderation.

Use Community Sift to shape a safe, healthy, and, ultimately, engaged community of users in your social network, online game, user forum, and more!

What kinds of products is this designed for?

Community Sift was built for any product that hosts user-generated content, from messaging apps to social networks to online games. Our classifier weighs risk, topic, user reputation, and context, and works for any demographic and tolerance level. We’ve created settings that work for child-directed, family-friendly, general, and adult audiences.

Why does my product need a filter?

With online communities, there is always the risk of users posting content that is threatening, illegal, or dangerous. We’ve seen it happen time and time again: A social product launches without an effective content filter or moderation system, users are subjected to abuse and harassment, and the product’s reputation and user base suffer.

Our research tells us that in most communities, about 2% of content will be high-risk. To put that in perspective, for every 1 million lines of chat, 20,000 lines will contain some kind of unwanted behavior.

Every community has a unique resilience level, largely based on demographic and product genre. Even products targeted at an adult audience have to define what kind of behavior is appropriate. And without tools in place to enforce community guidelines, how will you protect your users from harmful content?

Can you handle text, usernames, and images?

Yes! Community Sift can handle all three. For text, our system can classify both short text (under 1000 characters) and long text (over 1000 characters).

Usernames are different from chat, so we designed a filter specifically for them, one that looks at patterns in long strings of text.

For images, we’ve partnered with the world’s leading image analysis filter. We combined this cutting-edge artificial intelligence with our user reputation system and text classifier to create a tool unlike anything else on the market.
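
As a rough illustration, submitting each content type to a moderation API might look something like the sketch below. The endpoint, fields, and response shape here are entirely hypothetical, invented for illustration; the actual Community Sift integration details come from its own documentation.

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/classify"  # placeholder endpoint

def classify(content_type: str, payload: dict) -> dict:
    """POST one piece of content (text, username, or image URL) for classification."""
    body = json.dumps({"type": content_type, **payload}).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example calls (commented out because the endpoint above is a placeholder):
# classify("text", {"text": "hello world", "user_id": "u123"})
# classify("username", {"name": "xX_sn1per_Xx"})
# classify("image", {"url": "https://cdn.example.com/upload.png"})
```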

How are you different from your competitors?

Most content filters are good at finding obvious high-risk content. But users who are determined to harass, threaten, and do harm to the community have figured out how to subvert traditional filters. Unsatisfied with the current state of Natural Language Processing (NLP), we built a new kind of artificial intelligence model to detect dangerous content, called Unnatural Language Processing (uNLP). With this model, we look for the unnatural or “hidden” meaning of words. Our system is highly accurate at detecting 1337 5peAk, Unicode characters, vertical chat, and other language manipulations.

Additionally, our user reputation system pays special attention to users who consistently break community guidelines. Operating under the assumption that behavior patterns repeat over time, the system automatically places a user’s chat in a more restricted state when they break the rules often. This way, sneaky manipulations that wouldn’t otherwise be caught are more likely to be filtered. As users change their behavior, their chat settings become more permissive. The triggers that move users between these states are customizable and can be tailored to a variety of communities.

Do you provide moderation? Do I still need a team of moderators?

Community Sift arms your human team with smart, purposeful automation. When you send text, usernames, or images to Community Sift, it classifies them based on risk, topic, user reputation, and context, all in real time. Predetermined settings then decide what to do with it — whether that’s filter, hash, or allow. Not only that, automatic action can be taken against users based on behavioral triggers. Content that meets certain criteria can also be routed to a queue for your moderators to review, approve, or reject based on community guidelines.
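
Here is a minimal sketch of that classify-then-act flow, with hypothetical names and thresholds. "Hash" in this sketch means masking filtered text with # characters, and borderline content is routed to a human review queue:

```python
review_queue = []  # content awaiting human moderator review

def moderate(text: str, risk: float) -> str:
    """Apply a simple three-way policy: allow, queue for review, or hash."""
    if risk < 0.3:
        return text                    # low risk: allow as-is
    if risk < 0.7:
        review_queue.append(text)      # borderline: humans make the final call
        return text
    return "#" * len(text)             # high risk: hash (mask) immediately

print(moderate("good game!", 0.1))        # allowed
print(moderate("borderline joke", 0.5))   # allowed, but queued for review
print(moderate("abusive message", 0.9))   # masked as ###############
```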

Any community that hosts UGC will require some form of human moderation. An all-human moderation solution is expensive and time-consuming. For a cost-effective and efficient option, we recommend leveraging a sophisticated and time-tested blend of automation and human review.

What options are available in your moderation tools?

Depending on your package, the Community Sift moderation tool includes a live chat viewer, customizable content queues, real-time rule changes, player profile pages with player actions, a variety of reports, the ability to add and remove users, and more.

How long will it take to train my team on the tools?

Our moderation tools were designed for ease of use. Typically it takes a few training sessions to get your team up to speed.

We provide the training, a comprehensive manual, and ongoing support to ensure that your team is comfortable using the tool.

Filter

What are Trust Levels?

Users move between Trust Levels based on their behavior in chat. Everyone starts out in a Default state, meaning they are subject to the same chat settings. Then, based on a set of predetermined triggers, they can move to a Trusted state, which opens up their chat settings, or a Not-Trusted state, which further restricts their chat.
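
Conceptually, Trust Levels behave like a small state machine. The sketch below uses invented triggers (violation counts and clean-streak days); in Community Sift the actual triggers are customizable per community:

```python
TRUSTED, DEFAULT, NOT_TRUSTED = "trusted", "default", "not_trusted"

def update_trust(level: str, violations: int, clean_days: int) -> str:
    """Move a user between trust levels based on recent behavior."""
    if violations >= 3:
        return NOT_TRUSTED           # repeated rule-breaking restricts chat
    if violations == 0 and clean_days >= 14:
        return TRUSTED               # a sustained clean record opens chat up
    return level                     # no trigger fired: keep the current level

print(update_trust(DEFAULT, violations=4, clean_days=0))       # not_trusted
print(update_trust(NOT_TRUSTED, violations=0, clean_days=20))  # trusted
```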

Can my team make changes to language patterns?

Yes! Community Sift is a self-serve tool, although we are always happy to provide support, guidance, and recommendations.

What is Unnatural Language Processing?

Users who are determined to bypass content filters are savvy, creative, and will always search for ways to manipulate your system. To combat this, we invented a new way to analyze chat, called Unnatural Language Processing (uNLP). It finds buried high-risk content in l337 5p34k, mIxEd cAps, vertical chat (words/letters spread across multiple lines), Unicode, and longstringsofrandomletters.

Can you handle l33t, vertical chat, Unicode characters, language manipulations, etc?

Unnatural Language Processing is our superpower and secret weapon. Our unique approach to language processing is specifically designed to find the “unnatural,” hidden, and manipulative meanings in chat.

Using a blend of artificial intelligence and ongoing human assessment, our system automatically looks for hidden letters, numbers, and even characters from other languages. We have built out thousands of unique rules to account for subversive manipulations like the ones below:

  • you have a nice pASS
  • fu(k
  • f u c k
  • son of a b3 3ch
  • a88 hole
  • fw.oak off
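
A toy sketch of one normalization step that catching manipulations like these might involve: mapping common character substitutions back to letters and stripping separators before matching. Real coverage (Unicode confusables, vertical chat, embedded words like “pASS”) requires far more than this one table; the mapping below is illustrative only.

```python
# Undo common digit/symbol swaps, then drop anything that isn't a letter.
SUBSTITUTIONS = str.maketrans({"3": "e", "4": "a", "1": "i", "0": "o",
                               "5": "s", "8": "b", "@": "a", "(": "c"})

def normalize(text: str) -> str:
    """Lowercase, reverse substitutions, and strip spaces and punctuation."""
    text = text.lower().translate(SUBSTITUTIONS)
    return "".join(ch for ch in text if ch.isalpha())

print(normalize("fu(k"))      # fuck
print(normalize("f u c k"))   # fuck
print(normalize("b3 3ch"))    # beech -- near enough to flag for review
```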

How do you classify language?

Community Sift looks at a variety of features — including riskiness, topic, user reputation, and context — to decide if a word or phrase passes or fails your community guidelines. We’ve been refining this process for over five years, and have spent millions of dollars to ensure that it’s the most accurate content classification system on the market.

What is the user reputation system?

With our reputation system, users move from a Default state to Trusted or Not-Trusted, and back. The playing field is level: everyone starts out in a Default state, and as they break or follow community guidelines, their social chat becomes more or less permissive, all based on unique settings that you choose.

Can you handle flooding?

Yes. We can handle both duplicate-message flooding (sending the exact same message multiple times in a row) and velocity-based flooding (sending messages in rapid succession).
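
A minimal sketch of both checks, with illustrative window sizes and limits rather than Community Sift's actual defaults:

```python
import time
from collections import defaultdict, deque

last_message = {}                  # user_id -> (text, consecutive repeat count)
recent_times = defaultdict(deque)  # user_id -> timestamps of recent messages

def is_flooding(user_id: str, text: str, max_repeats: int = 3,
                max_per_window: int = 5, window_s: float = 10.0) -> bool:
    """Flag a message if it repeats too often or arrives too fast."""
    now = time.monotonic()

    # Duplicate flooding: the same text sent repeatedly in a row.
    prev_text, count = last_message.get(user_id, (None, 0))
    count = count + 1 if text == prev_text else 1
    last_message[user_id] = (text, count)
    if count > max_repeats:
        return True

    # Velocity flooding: too many messages inside the sliding window.
    times = recent_times[user_id]
    times.append(now)
    while times and now - times[0] > window_s:
        times.popleft()
    return len(times) > max_per_window
```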

Can I create different settings for different rooms, servers, features, etc?

Absolutely. You can create different guidelines for different features in your product.

Can I create different settings for different age groups?

Yes. Community Sift was designed to be flexible. Many of our clients have under-13 and over-13 users that interact with each other, each with their own set of unique guidelines.
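
One way to picture per-feature, per-audience settings is as a policy lookup keyed by both dimensions. The structure and values below are invented for illustration:

```python
# Hypothetical guideline sets, scoped by (feature, age group).
POLICIES = {
    ("global_chat", "under_13"): {"max_risk": 0.2, "allow_links": False},
    ("global_chat", "over_13"):  {"max_risk": 0.5, "allow_links": True},
    ("guild_chat",  "over_13"):  {"max_risk": 0.6, "allow_links": True},
}

def policy_for(feature: str, age_group: str) -> dict:
    """Look up the guideline set for a feature and audience, with a safe default."""
    return POLICIES.get((feature, age_group),
                        {"max_risk": 0.3, "allow_links": False})

print(policy_for("global_chat", "under_13"))  # stricter settings for kids
```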

What languages do you support?

We currently support the following languages:

  • English
  • Spanish
  • Portuguese
  • French
  • Russian
  • Italian
  • Turkish
  • German
  • Arabic
  • Chinese (Simplified & Traditional)
  • Dutch
  • Finnish
  • Japanese
  • Vietnamese
  • Korean
  • Polish
  • Indonesian

We’re in the process of building out more languages and will update as they progress. If the language you need isn’t on this list, just let us know — we’ve created a process that can rapidly deploy a new language within a matter of weeks.

Compliance & Kids

What does the kidSAFE seal on your website mean?

The kidSAFE Seal Program is an independent safety certification service and seal-of-approval program designed exclusively for child-friendly websites and technologies, including online game sites, educational services, virtual worlds, social networks, mobile apps, tablet devices, connected toys, and other similar online and interactive services. kidSAFE also audits and certifies the practices of third-party vendors that service this industry.

Community Sift underwent rigorous testing and independent review to achieve kidSAFE certification.

Will this help with COPPA compliance and GDPR?

Yes. We designed a set of rules for Personally Identifiable Information specifically with COPPA and GDPR requirements in mind.
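
As a toy illustration of what pattern-based PII detection can look like, the sketch below flags emails and phone-like numbers. Real COPPA- and GDPR-oriented rules cover much more (addresses, social handles, evasive attempts to share PII); these two patterns are purely illustrative.

```python
import re

PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),          # email addresses
    re.compile(r"\b\d{3}[-. ]?\d{3}[-. ]?\d{4}\b"),  # US-style phone numbers
]

def contains_pii(text: str) -> bool:
    """Return True if the text matches any known PII pattern."""
    return any(p.search(text) for p in PII_PATTERNS)

print(contains_pii("email me at kid@example.com"))  # True
print(contains_pii("good game everyone"))           # False
```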

We’ve partnered with multiple kids’ products that have achieved COPPA compliance and earned their kidSAFE seal.

We still recommend a COPPA or GDPR review by a third party to look at your privacy statement, account creation process, and internal data storage.

Is this safe for kids?

Absolutely. We believe in a world free of online bullying, harassment, and child exploitation. Helping companies provide kids with safe online spaces is core to our vision.

Technical

What is your average latency?

Between 20 and 30 ms.

Can your service scale with my product as it grows?

In the first quarter of 2017 alone, we processed over 48,000,000,000 messages, more than 100,000,000 usernames, and over 16,000,000 images.

We are constantly improving and strengthening our backend architecture to ensure that our system can handle these kinds of numbers — and much more.

Can I try Community Sift before making a decision?

Community Sift is AI-driven, so it’s crucial that we tune it based on your community’s unique profile, use case, and expected results.

If you would like to “test drive” the tool, please contact us to schedule a demo. We would love to discuss discovery and testing options for you and your team.

Get in touch to learn more.

Contact Us

Hello! Send us your question, and we'll get back to you as soon as possible.
