Please use this identifier to cite or link to this item:
http://localhost:8081/jspui/handle/123456789/20241

| Title: | BERT BASED MODEL FOR HATE SPEECH DETECTION |
| Authors: | Kumar, Paramanand |
| Issue Date: | May-2022 |
| Publisher: | IIT, Roorkee |
| Abstract: | This study explores a deep multi-class classifier that builds on the pre-trained Bidirectional Encoder Representations from Transformers (BERT) to classify hateful speech. The proposed framework comprises five main segments: (1) BERT-base generates word embeddings, representing the text as meaningful vectors. (2) The vectors from the last four hidden layers of BERT are fed into Bidirectional LSTMs to extract further contextual features. (3) These output representations are then fed into separate CNNs to capture local patterns. (4) A gating mechanism then combines each pair of CNN and BiLSTM outputs into a weighted sum. (5) The outputs of the gating operations are concatenated and passed to a fully connected layer for classification. The performance of the proposed framework is examined on two widely used datasets, and the results show that it classifies hate speech more accurately than other BERT-based hate speech detection systems, improving scores on all metrics. |
| URI: | http://localhost:8081/jspui/handle/123456789/20241 |
| Research Supervisor/ Guide: | Roy, Partha Pratim |
| metadata.dc.type: | Dissertations |
| Appears in Collections: | MASTERS' THESES (CSE) |
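The five-stage pipeline described in the abstract (BERT's last four hidden layers, one BiLSTM + CNN branch per layer, a gate mixing each branch pair, then concatenation into a fully connected classifier) can be sketched in PyTorch. This is a minimal illustration, not the thesis's implementation: all layer sizes, the pooling choices, and the sigmoid gate formulation are assumptions, and random tensors stand in for the actual BERT hidden states.

```python
import torch
import torch.nn as nn

class GatedBranch(nn.Module):
    """One branch of the sketch: a BiLSTM and a CNN read the same BERT
    hidden layer, and a learned sigmoid gate mixes the two feature
    vectors (illustrative sizes, not the thesis's hyperparameters)."""
    def __init__(self, hidden=768, lstm_hidden=128, conv_ch=256, kernel=3):
        super().__init__()
        self.bilstm = nn.LSTM(hidden, lstm_hidden, batch_first=True,
                              bidirectional=True)
        self.conv = nn.Conv1d(hidden, conv_ch, kernel, padding=kernel // 2)
        # project the BiLSTM feature to the CNN feature size before gating
        self.proj_lstm = nn.Linear(2 * lstm_hidden, conv_ch)
        self.gate = nn.Linear(2 * conv_ch, conv_ch)

    def forward(self, x):                       # x: (batch, seq, hidden)
        lstm_out, _ = self.bilstm(x)            # (batch, seq, 2*lstm_hidden)
        lstm_feat = self.proj_lstm(lstm_out.mean(dim=1))   # mean-pool
        conv_out = torch.relu(self.conv(x.transpose(1, 2)))
        conv_feat = conv_out.max(dim=2).values             # global max-pool
        g = torch.sigmoid(self.gate(torch.cat([lstm_feat, conv_feat], dim=1)))
        return g * lstm_feat + (1 - g) * conv_feat  # gated weighted sum

class GatedBertHead(nn.Module):
    """Classifier head over the last four BERT hidden layers: one gated
    branch per layer, outputs concatenated into a fully connected layer."""
    def __init__(self, num_layers=4, num_classes=3, conv_ch=256):
        super().__init__()
        self.branches = nn.ModuleList(GatedBranch(conv_ch=conv_ch)
                                      for _ in range(num_layers))
        self.fc = nn.Linear(num_layers * conv_ch, num_classes)

    def forward(self, hidden_states):   # list of four (batch, seq, 768)
        feats = [b(h) for b, h in zip(self.branches, hidden_states)]
        return self.fc(torch.cat(feats, dim=1))

# Random tensors stand in for BERT's last four hidden layers
# (batch of 2 sentences, 16 tokens, 768-dim BERT-base states).
layers = [torch.randn(2, 16, 768) for _ in range(4)]
logits = GatedBertHead()(layers)
print(logits.shape)  # torch.Size([2, 3])
```

In practice the hidden states would come from a `transformers` BERT model run with `output_hidden_states=True`, taking the last four entries of the returned tuple.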
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| 20535019_Paramanand Kumar.pdf | | 1.55 MB | Adobe PDF | View/Open |
