---
language: ar
license: mit
tags:
- arabic
- hate-speech-detection
- bert
- text-classification
- pytorch
datasets:
- arabic-levantine-hate-speech-detection
metrics:
- accuracy
- f1
model-index:
- name: arabic-bert-hate-speech-detection
  results:
  - task:
      type: text-classification
      name: Hate Speech Detection
    dataset:
      type: arabic-levantine-hate-speech-detection
      name: Arabic Levantine Hate Speech Detection
    metrics:
    - type: accuracy
      value: 0.845
      name: Accuracy
    - type: f1
      value: 0.84
      name: F1 Score
---

# Arabic BERT Hate Speech Detection

This model is a fine-tuned version of `aubmindlab/bert-base-arabertv2` for Arabic hate speech detection.

## Model Description

- **Base Model**: aubmindlab/bert-base-arabertv2
- **Task**: Binary text classification (Normal vs Hate Speech)
- **Language**: Arabic
- **Accuracy**: 84.5%

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model_name = "Ibracadabra13/arabic-bert-hate-speech-detection"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Function to predict hate speech
def predict_hate_speech(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128)
    
    with torch.no_grad():
        outputs = model(**inputs)
        predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
        predicted_class = torch.argmax(predictions, dim=-1).item()
        confidence = predictions[0][predicted_class].item()
    
    label_map = {0: 'Normal', 1: 'Hate Speech'}
    return {
        'prediction': label_map[predicted_class],
        'confidence': confidence,
        'is_hate_speech': predicted_class == 1
    }

# Example usage (the Arabic input translates to "You are a vile animal")
result = predict_hate_speech("أنت حيوان حقير")
print(result)  # {'prediction': 'Hate Speech', 'confidence': 0.97, 'is_hate_speech': True}
```
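
For quick experiments you can also use the Transformers `pipeline` API instead of the manual function above. A minimal sketch; the label strings come from the model config, so expect `LABEL_0`/`LABEL_1` unless `id2label` maps them to readable names, and the score shown is illustrative:

```python
from transformers import pipeline

# Convenience wrapper around tokenizer + model; labels are taken from the
# model config (LABEL_0 = Normal, LABEL_1 = Hate Speech unless id2label is set)
classifier = pipeline(
    "text-classification",
    model="Ibracadabra13/arabic-bert-hate-speech-detection",
)

print(classifier("مرحبا، كيف حالك؟"))  # Arabic for "Hello, how are you?"
# e.g. [{'label': 'LABEL_0', 'score': 0.98}]
```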

## Training Details

- **Training Data**: Arabic Levantine Hate Speech Detection Dataset
- **Training Method**: Fine-tuning with a manual PyTorch training loop (sketched below)
- **Epochs**: 2
- **Batch Size**: 4
- **Learning Rate**: 2e-5
- **Optimizer**: AdamW
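
A minimal sketch of a fine-tuning loop under these settings. This is not the exact training script: `train_dataset` is a placeholder for any map-style dataset that yields `input_ids`, `attention_mask`, and `labels` tensors of equal length (e.g. a tokenized dataset padded to a fixed `max_length`):

```python
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification

# Hyperparameters listed above
EPOCHS, BATCH_SIZE, LR = 2, 4, 2e-5

model = AutoModelForSequenceClassification.from_pretrained(
    "aubmindlab/bert-base-arabertv2", num_labels=2
)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.train()

optimizer = AdamW(model.parameters(), lr=LR)

# `train_dataset` is a hypothetical placeholder (see the note above)
loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)

for epoch in range(EPOCHS):
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)  # loss is computed internally from `labels`
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    print(f"epoch {epoch + 1} done, last batch loss {outputs.loss.item():.4f}")
```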

## Performance

- **Accuracy**: 84.5%
- **Normal Text**: 83% precision, 96% recall
- **Hate Speech**: 90% precision, 65% recall
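
Per-class figures like these can be reproduced on a labeled test split with scikit-learn's `classification_report`, reusing the `predict_hate_speech` function from the Usage section. A sketch, where `test_texts` and `test_labels` are hypothetical names for a held-out split:

```python
from sklearn.metrics import classification_report

# test_texts / test_labels are hypothetical placeholders for a held-out
# labeled split (labels: 0 = Normal, 1 = Hate Speech)
preds = [int(predict_hate_speech(t)["is_hate_speech"]) for t in test_texts]
print(classification_report(test_labels, preds, target_names=["Normal", "Hate Speech"]))
```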

## Limitations

This model was trained on a single Levantine Arabic dataset and may not generalize to other Arabic dialects or contexts. Note also the 65% recall on the hate-speech class above: a substantial share of hateful content can be missed. Use with caution in production environments.