
Toxic Speech Detection

EasyChair Preprint no. 10192

12 pages · Date: May 17, 2023

Abstract

People on the internet nowadays frequently post comments or statements on social media platforms and in online discussions that are abusive and hateful. Detecting such content manually is extremely challenging, so an automatic approach must be designed. Once restricted to verbal communication, hatred now spreads rapidly via the Internet: as more people gain access to social media and online discussion groups, they use them to disseminate hateful messages. Numerous nations have passed laws to curb the dissemination of hate speech online, and in light of the platforms' repeated failures to contain it, they hold the companies responsible. However, as the volume of internet content grows, so does the propagation of hate speech. Human monitoring of hate speech on online platforms is not only impractical but also prohibitively expensive and time-consuming because of the massive amounts of data involved. Therefore, it is essential to automatically monitor online user content for hate speech and remove it from online media. Since many contemporary methods are not easily interpreted, it can be puzzling to learn why such systems reached their verdicts. This article suggests using the Support Vector Machine (SVM) and Naive Bayes algorithms to automatically detect hate statements in online discussions.
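The preprint itself does not include code on this page; as a rough illustration of the kind of pipeline the abstract describes, the sketch below trains the two classifiers it names (an SVM and a Naive Bayes model) on TF-IDF features using scikit-learn. The toy comments, labels, and model settings are assumptions for illustration only, not the authors' implementation or dataset.

```python
# Minimal sketch (not the authors' code): TF-IDF text features fed to an SVM
# and a Naive Bayes classifier, the two algorithms named in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical placeholder data: 1 = toxic, 0 = non-toxic.
comments = [
    "you are awful and nobody wants you here",
    "great point, thanks for sharing",
    "get lost, you idiot",
    "I respectfully disagree with this argument",
]
labels = [1, 0, 1, 0]

for name, clf in [("SVM", LinearSVC()), ("Naive Bayes", MultinomialNB())]:
    # Each pipeline vectorizes raw text with TF-IDF, then fits the classifier.
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(comments, labels)
    print(name, model.predict(["you are an idiot", "thanks for the feedback"]))
```

In practice one would train on a labeled hate-speech corpus and evaluate with metrics such as the F1 score listed in the keyphrases, rather than the toy examples shown here.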

Keyphrases: Classification, Decision Tree, F1 Score, Machine Learning, SVM

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:10192,
  author = {Samayamanthula Lokesh Kumar and Tummalapalli Sree Rama Vijay and M Baskar},
  title = {Toxic Speech Detection},
  howpublished = {EasyChair Preprint no. 10192},
  year = {EasyChair, 2023}}