Health and Toxicity of Online Communities
In collaboration with the company TwoHat, this project aims to improve conversation-management tools and toxicity-assessment metrics in TwoHat's Community Sift system through the development of innovative algorithms.
How can artificial intelligence address the toxicity of online communities?
Online communities abound today: on social networking sites, on websites for real-world communities such as schools and clubs, on web forums, on video game discussion boards, and even in the comment sections of news sites and blogs. Some of these communities are “healthy” and involve polite discussion among respectful members, but others are “toxic” and lead to fierce fighting and cyberbullying, involve fraud, or even encourage suicide, radicalization, or the sexual predation and grooming of minors.
Detecting toxic messages and toxic users is a major challenge, in part because these are adversarial users who actively try to bypass or defeat detection software and filters. Additionally, while much research in the literature has studied online communities (for example, text normalization to correct misspelled words, sentiment analysis to infer users’ moods, or user modeling to recognize different user personalities), most of these studies have assumed that users are cooperative rather than deliberately trying to deceive the software.
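To illustrate this adversarial dimension concretely, here is a minimal sketch in Python, not taken from Community Sift or the project itself, of the kind of text normalization that undoes simple obfuscation before a toxicity classifier sees a message. The substitution map and regular expressions are illustrative assumptions; real evasion strategies are far more varied.

```python
import re

# Hypothetical substitution map: adversarial users often replace letters
# with look-alike symbols (e.g., "h@te", "1d10t") to slip past word filters.
LEET_MAP = str.maketrans({"@": "a", "$": "s", "0": "o", "1": "i", "3": "e", "7": "t"})

def normalize(message: str) -> str:
    """Undo common obfuscation tricks before scoring a message for toxicity."""
    text = message.lower().translate(LEET_MAP)
    # Collapse runs of 3+ identical characters ("haaaate" -> "haate").
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)
    # Strip separators inserted inside words ("h.a.t.e" -> "hate").
    text = re.sub(r"(?<=\w)[.\-_*](?=\w)", "", text)
    return text

print(normalize("I h@te y0u, 1d10t!!!"))  # -> "i hate you, idiot!!"
```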
The private company TwoHat develops the Community Sift software, which assists community moderators in identifying problematic messages in online conversations. In this research project, we are collaborating with TwoHat to pursue the following five general objectives:
1. Explore improvements to conversation-management tools and toxicity-assessment metrics in the context of the Community Sift system;
2. Research new methodologies for detecting toxicity in online conversations;
3. Develop innovative algorithms to collect documented evidence in support of consistent assessment and prediction;
4. Develop real-time implementations of these methodologies that can handle the massive flow of data from online conversations (a minimal sketch of such a pipeline follows this list);
5. Study the nature of toxic behaviors, their impact on users and communities, and the mechanisms to curb them.
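To make objective 4 concrete, the sketch below shows how a real-time moderation pass over a message stream might be structured as a generator, scoring and flagging messages as they arrive rather than in batches. The names `moderate_stream`, `classify`, and `flag_threshold` are assumptions for illustration, not part of the project or of Community Sift.

```python
from collections import deque
from typing import Callable, Dict, Iterable, Iterator

def moderate_stream(
    messages: Iterable[Dict[str, str]],
    classify: Callable[[str], float],
    flag_threshold: float = 0.8,
) -> Iterator[Dict[str, object]]:
    """Score each incoming message and yield those needing moderator review.

    `classify` is a placeholder for a trained toxicity model returning a
    score in [0, 1]; a real deployment would have to run it at the latency
    and volume of the conversation stream.
    """
    recent = deque(maxlen=1000)  # sliding window of recent scores, e.g. for drift monitoring
    for msg in messages:
        score = classify(msg["text"])
        recent.append(score)
        if score >= flag_threshold:
            yield {"user": msg["user"], "text": msg["text"], "score": score}

# Toy usage with a keyword-based stand-in for the classifier:
stream = [{"user": "a", "text": "hello there"}, {"user": "b", "text": "I hate you"}]
for flagged in moderate_stream(stream, lambda t: 1.0 if "hate" in t else 0.0):
    print(flagged)  # {'user': 'b', 'text': 'I hate you', 'score': 1.0}
```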
Partner Organizations
TwoHat
Project Team
Principal Investigator: Richard Khoury (Université Laval)
Co-Investigators: Christian Gagné (Université Laval), Luc Lamontagne (Université Laval), François Laviolette (Université Laval), Mario Marchand (Université Laval) and Sehl Mellouli (Université Laval).
Project Funding Period: 2017–2022