Language is at the core of what we do.
First, we assign a risk level to every word or phrase in our system, on a sliding scale that allows for context and nuance. Risk levels can be adjusted according to your community’s needs and tolerance. Then, words and phrases are assigned topics.
Finally, you decide what your community sees.
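The flow described above — risk levels per phrase, topics per phrase, and community-chosen thresholds — can be pictured as a simple lookup-and-threshold pipeline. This is a hypothetical illustration only: the lexicon entries, risk scale, topic names, and thresholds below are invented, not Community Sift's actual data or API.

```python
# Hypothetical sketch of a risk-level + topic classification pipeline.
# All phrases, risk values, and thresholds are invented for illustration.

# Each phrase carries a risk level (1 = mild .. 5 = severe) and a topic.
LEXICON = {
    "idiot": {"risk": 2, "topic": "bullying"},
    "kill yourself": {"risk": 5, "topic": "self_harm"},
}

# Each community chooses its own tolerance per topic.
COMMUNITY_THRESHOLDS = {
    "bullying": 3,    # allow mild insults, block severe ones
    "self_harm": 1,   # block anything in this topic
}

def allowed(message: str) -> bool:
    """Return True if the message passes the community's thresholds."""
    text = message.lower()
    for phrase, info in LEXICON.items():
        if phrase in text and info["risk"] >= COMMUNITY_THRESHOLDS[info["topic"]]:
            return False
    return True

print(allowed("you're an idiot"))  # mild insult, under this community's threshold
print(allowed("kill yourself"))    # severe, blocked
```

Because the thresholds live outside the lexicon, the same word list can serve a strict children's community and a permissive adult one — only the per-topic settings change.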
Online bullying doesn’t just affect children and teens. Harassment and abuse can happen to anyone, on any platform. The Pew Research Center has reported that 73% of adult internet users have witnessed harassment online, and 40% have experienced it themselves.
With Community Sift, you can protect your users from abusive language and maintain a safe, healthy environment for everyone.
A profanity filter doesn’t have to be a four-letter word.
With our nuanced approach to language classification, you can disallow the worst comments while still giving your users the creative license to express themselves freely. And if your product is marketed towards children, you know they’ll be protected from vulgarity 24/7.
There are two things that you should never, ever allow in your social product: grooming (the act of befriending a child and gaining their trust in order to exploit them sexually) and child sexual abuse material (CSAM). We are tackling images with CEASE, and grooming with Community Sift.
Our content filter detects grooming and filters it instantly, before it even reaches its intended victim. It’s then flagged and sent to an escalation queue for immediate review and, if necessary, action. We’ve partnered with major law-enforcement agencies to study real online conversations between child predators and their victims, so we can detect grooming behavior more accurately for social platforms.
As a business, you have a legal and moral obligation to prevent this abuse from happening in your product. Community Sift has done much of this hard (and, let’s face it, heartbreaking) work for you.
Every community will have a different tolerance for sexual language. For some, only conversations about relationships or love are acceptable. Others will allow flirting and mild sexual innuendos. And some mature communities might even allow sexual dialogue between users in private chat.
Whatever your community dynamic, we designed our sexting topic to adapt to every use case.
When building a social product, one of the first things you need to decide is how you will handle personally identifiable information (PII). In the US, if your product is targeted at users 12 and younger, the Children’s Online Privacy Protection Act (COPPA) requires you to keep PII out of your product. In addition, child-safety certifications like the kidSAFE Seal Program require effective, proven strategies for dealing with real names, email addresses, phone numbers, physical addresses, and more.
Community Sift has child-friendly settings that can help you achieve COPPA compliance.
Hate speech can include racism, sexism, and religious discrimination. And when it becomes a trend, it results in bad publicity, an unfriendly social climate, and the loss of influential members of the community.
We’ve identified hundreds of patterns associated with hate speech, and our team engages in ongoing research to find new slurs and the ever-changing idioms of hatred. By detecting and blocking hate speech, you will shape a healthy community in which all users feel welcome.
There is a lot of talk on the internet about suicide. Not all of it is genuine, so how do you know when a user truly needs help? Our team of industry veterans will work closely with you to build a strategy for dealing with self-harm and suicide on your platform.
We can set up triggers to alert you in real time when a user has posted alarming content more than once. And we can even send messages with suicide hotline links to users who need support.
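A repeated-alert trigger of the kind described above could be sketched as follows. This is a hypothetical illustration — the class name, threshold, and alert mechanism are invented, not the product's real interface.

```python
from collections import defaultdict

# Hypothetical sketch: escalate after a user posts alarming
# content more than once. Names and thresholds are invented.
class SelfHarmTrigger:
    def __init__(self, alert_after: int = 2):
        self.alert_after = alert_after
        self.counts = defaultdict(int)   # flagged posts per user
        self.alerts = []                 # users escalated to moderators

    def record_flagged_post(self, user_id: str) -> None:
        """Count a flagged post; escalate once the threshold is reached."""
        self.counts[user_id] += 1
        if self.counts[user_id] == self.alert_after:
            self.alerts.append(user_id)  # e.g. notify a moderator queue

trigger = SelfHarmTrigger()
trigger.record_flagged_post("user42")    # first flag: watch, don't alert
trigger.record_flagged_post("user42")    # second flag: escalate
print(trigger.alerts)
```

Keeping the threshold configurable matters here: a single mention of suicide is often not genuine, so alerting only on repetition cuts down on false alarms while still surfacing users who need support.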
Internet scammers frequently use chat rooms, websites, and games to commit cyber fraud. If this happens on your platform, it can permanently erode the trust that users have in your product and brand.
With Community Sift, you can block language that is typical of scammers, including requests for usernames, passwords, and financial information, as well as links to scam sites.
We call it Unnatural Language Processing (UnLP). UnLP, our unique take on language classification, is designed to find the hidden, “unnatural” meaning buried in manipulative language.
Savvy internet users know how to bypass traditional filters, but not ours. We maintain a global list of all the ways users disguise language. Like anti-virus software, we’ve already seen it all — so your users never have to see it.
The internet is a big place, and you never know what you’re getting into when you visit an unfamiliar site. That’s why we created this topic.
By blocking outgoing links, you will protect your community from unwittingly encountering risky or even illegal content. And you can prevent those same users from being redirected to dangerous phishing sites where their personal information could be compromised.
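At its simplest, outgoing-link blocking is a matter of extracting domains from a message and checking them against a list. The sketch below is hypothetical — the regex, the allowlist, and the allowlist-only policy are invented assumptions, not a description of how Community Sift actually evaluates links.

```python
import re

# Hypothetical sketch: block messages that link to domains
# outside an invented allowlist.
ALLOWED_DOMAINS = {"example.com"}  # invented allowlist

URL_RE = re.compile(r"https?://([\w.-]+)", re.IGNORECASE)

def contains_blocked_link(message: str) -> bool:
    """Return True if the message links to any non-allowlisted domain."""
    for domain in URL_RE.findall(message):
        if domain.lower() not in ALLOWED_DOMAINS:
            return True
    return False

print(contains_blocked_link("visit http://phish.example.net now"))  # True
print(contains_blocked_link("see https://example.com/help"))        # False
```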