Algorithms that account for the polarity of emotions could anticipate toxic conversations, according to Richard Khoury
A tool to spot the imminence of toxic comments online
In the movie Minority Report, a specialized police force apprehends criminals based on foreknowledge of wrongdoings provided by psychics called precogs. One cannot help but think that these precogs would be very useful in preventing conversations from escalating and leading to an onset of toxic comments on the web. This is because, for now, moderators step in after the fact, once offensive comments have already been posted.
Éloi Brassard-Gourdeau and Richard Khoury, from the Department of Computer Science and Software Engineering, believe there is room for improvement. In an article in prepublication on arXiv, they demonstrate that the positive or negative nature of the words used in a conversation, as well as the intensity of that polarity, improve the algorithms used to predict whether a conversation is likely to deteriorate.
Currently, moderators are notified that a toxic conversation has taken place by users who file a complaint, or by relatively simple, not always reliable, algorithms. “These algorithms detect keywords which may be offensive, but which are not necessarily offensive in a given context,” Richard Khoury points out. “Also, it would be best to detect that a conversation is taking a turn for the worse before it becomes toxic.”
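To illustrate the limitation Khoury describes, here is a minimal sketch of keyword-based flagging, the kind of simple detection the article says moderators rely on today. The word list and messages are illustrative placeholders, not anything from the authors' work.

```python
# Hypothetical list of flagged keywords (an assumption for illustration).
OFFENSIVE_KEYWORDS = {"idiot", "stupid"}

def flag_by_keywords(message: str) -> bool:
    """Flag a message if any word matches the keyword list,
    ignoring case and trailing punctuation."""
    return any(
        word.strip(".,!?").lower() in OFFENSIVE_KEYWORDS
        for word in message.split()
    )

# A genuinely hostile message is caught...
print(flag_by_keywords("You are an idiot"))  # True

# ...but so is a benign one that merely mentions the word,
# the context-blindness the article points out.
print(flag_by_keywords("Calling someone stupid is against the rules"))  # True
```

The second call is a false positive: the keyword appears in a rule-citing, non-offensive context, yet the detector cannot tell the difference.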
To achieve this, Éloi Brassard-Gourdeau and Professor Khoury have refined predictive algorithms using emotion analysis, an approach that consists of assigning a positive or negative polarity score to a word, sentence or conversation. The first part of the work was carried out using 1,270 pairs of conversations that took place between users of the Editing section of the Wikipedia site. Half of these exchanges had remained polite, while the other half had degenerated.
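The polarity scoring described above can be sketched in a few lines. This is only a lexicon-based toy illustrating the general idea of assigning polarity scores at the word, sentence and conversation level; the lexicon and its weights are invented for the example and are not the authors' actual model.

```python
# Tiny illustrative polarity lexicon (values are assumptions, not real data).
POLARITY = {
    "thanks": 1.0, "great": 0.8, "helpful": 0.6,
    "wrong": -0.5, "stupid": -0.9, "vandalism": -0.7,
}

def word_polarity(word: str) -> float:
    """Polarity of a single word; unknown words score 0 (neutral)."""
    return POLARITY.get(word.strip(".,!?").lower(), 0.0)

def sentence_polarity(sentence: str) -> float:
    """Average polarity over the sentiment-bearing words of a sentence."""
    scores = [word_polarity(w) for w in sentence.split()]
    scored = [s for s in scores if s != 0.0]
    return sum(scored) / len(scored) if scored else 0.0

def conversation_polarity(messages: list[str]) -> list[float]:
    """Per-message polarity trajectory of a conversation; a drift toward
    negative values is the kind of signal a predictor could exploit."""
    return [sentence_polarity(m) for m in messages]

print(sentence_polarity("Thanks, that was great!"))          # positive
print(sentence_polarity("That edit was stupid vandalism."))  # negative
```

A real system would use a learned model rather than a fixed lexicon, but the output shape is the same: a polarity trajectory whose downward slide can feed the prediction of whether an exchange will deteriorate.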