# Detoxify Multilingual Model

Mirror of the Detoxify multilingual model checkpoint for offline use.

Original source: [unitary/detoxify](https://github.com/unitaryai/detoxify)
## Model Info

- File: `multilingual_debiased-0b549669.ckpt`
- Size: 1061 MB
- Model Type: multilingual
## Usage

### Online (Normal)

```python
from detoxify import Detoxify

model = Detoxify('multilingual')
result = model.predict("Your text here")
print(result)
```
### Offline Setup

- Download `multilingual_debiased-0b549669.ckpt` from this repo
- Place it in `~/.cache/torch/hub/checkpoints/`
- Use normally (Detoxify will find it automatically)
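The placement step can be scripted. A minimal sketch, assuming the standard torch hub cache layout; the `install_checkpoint` helper below is illustrative and not part of Detoxify:

```python
from pathlib import Path
import shutil

# Checkpoint filename from this repo.
CKPT_NAME = "multilingual_debiased-0b549669.ckpt"

def cache_path(home: Path = Path.home()) -> Path:
    """Path where Detoxify looks for the cached checkpoint."""
    return home / ".cache" / "torch" / "hub" / "checkpoints" / CKPT_NAME

def install_checkpoint(downloaded: Path, home: Path = Path.home()) -> Path:
    """Copy a downloaded checkpoint into the torch hub cache if missing."""
    target = cache_path(home)
    target.parent.mkdir(parents=True, exist_ok=True)
    if not target.exists():
        shutil.copy2(downloaded, target)
    return target
```

Passing a different `home` is only there to make the helper easy to test; in normal use the defaults point at `~/.cache/torch/hub/checkpoints/`.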
```python
from detoxify import Detoxify

# Works offline if checkpoint is in cache
model = Detoxify('multilingual', device='cpu')
result = model.predict("Your text here")
print(result)
```
## Toxicity Categories

The model predicts scores for:

- `toxicity` - Overall toxicity
- `severe_toxicity` - Severe toxic content
- `obscene` - Obscene language
- `threat` - Threatening language
- `insult` - Insults
- `identity_attack` - Attacks on identity
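Each category maps to a score between 0 and 1. As an illustration of working with such a result, a simple thresholding pass might look like the sketch below; the `flag_toxic` helper and the example scores are made up, not library output:

```python
def flag_toxic(scores: dict, threshold: float = 0.5) -> list:
    """Return the category labels whose score meets the threshold."""
    return [label for label, score in scores.items() if score >= threshold]

# Hypothetical scores shaped like a Detoxify prediction for one text.
example = {
    "toxicity": 0.92,
    "severe_toxicity": 0.10,
    "obscene": 0.71,
    "threat": 0.02,
    "insult": 0.65,
    "identity_attack": 0.05,
}

print(flag_toxic(example))  # ['toxicity', 'obscene', 'insult']
```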
## Citation

```bibtex
@misc{detoxify,
  title={Detoxify},
  author={Hanu, Laura and Unitary team},
  howpublished={Github. https://github.com/unitaryai/detoxify},
  year={2020}
}
```