How can artificial intelligence address the toxicity of online communities?
Online communities abound today: on social networking sites, on websites for real-world communities such as schools or clubs, on web forums, on video game discussion boards, and even in the comment sections of news sites and blogs. Some of these communities are “healthy,” with polite discussion among respectful members; others are “toxic,” descending into fierce fighting and cyberbullying, harboring fraud, or even encouraging suicide, radicalization, or the sexual predation and grooming of minors.
Detecting toxic messages and toxic users is a major challenge, in part because toxic users are adversarial: they actively try to evade detection software and filters. Additionally, while much research in the literature has studied online communities (for example, in text normalization to correct misspelled words, in sentiment analysis to infer users’ moods, or in user modeling to recognize different user personalities), most of these studies have assumed that users are cooperative rather than deliberately trying to deceive the system.
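To make the adversarial aspect concrete, here is a minimal sketch of a toxic-message classifier. The training examples and labels are hypothetical, and any real system would need a large labeled corpus and adversarially robust preprocessing; the point of the sketch is that character-level features offer some resilience against the deliberate misspellings toxic users employ to evade word-based filters.

```python
# Minimal sketch of a toxic-message classifier.
# Assumption: the tiny labeled dataset below is hypothetical, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples (label 1 = toxic, 0 = non-toxic).
messages = [
    "thanks for the helpful answer!",
    "great point, I agree completely",
    "you are an idiot, get lost",
    "nobody wants you here, just quit",
]
labels = [0, 0, 1, 1]

# Character n-grams (rather than whole words) give some robustness to
# spelling tricks such as "id1ot", a common evasion tactic.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(messages, labels)

# The misspelled insult still shares most of its character n-grams
# with the toxic training examples, so it is likely classified as toxic.
print(model.predict(["you are an id1ot"]))
```

A word-level filter would miss "id1ot" entirely, which is precisely why treating users as cooperative, as most prior work does, breaks down in toxic communities.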