Detoxify Multilingual Model

Mirror of the Detoxify multilingual model checkpoint for offline use.

Original source: unitary/detoxify

Model Info

  • File: multilingual_debiased-0b549669.ckpt
  • Size: 1061 MB
  • Model Type: multilingual

Usage

Online (Normal)

from detoxify import Detoxify

model = Detoxify('multilingual')
result = model.predict("Your text here")
print(result)

Offline Setup

  1. Download multilingual_debiased-0b549669.ckpt from this repo
  2. Place it in ~/.cache/torch/hub/checkpoints/
  3. Use normally (Detoxify will find it automatically)

from detoxify import Detoxify

# Works offline if checkpoint is in cache
model = Detoxify('multilingual', device='cpu')
result = model.predict("Your text here")

Toxicity Categories

The model predicts scores for:

  • toxicity - Overall toxicity
  • severe_toxicity - Severe toxic content
  • obscene - Obscene language
  • threat - Threatening language
  • insult - Insults
  • identity_attack - Attacks on identity
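`predict` returns a dict mapping each category above to a score between 0 and 1. A minimal sketch of post-processing, using made-up scores (the helper name and the 0.5 threshold are illustrative, not part of the Detoxify API):

```python
def flagged_categories(scores: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Return the categories whose score meets or exceeds the threshold."""
    return [name for name, score in scores.items() if score >= threshold]


# Example scores in the shape Detoxify returns for a single input
# (values here are invented for illustration)
example = {
    "toxicity": 0.91,
    "severe_toxicity": 0.12,
    "obscene": 0.64,
    "threat": 0.03,
    "insult": 0.72,
    "identity_attack": 0.08,
}

print(flagged_categories(example))  # → ['toxicity', 'obscene', 'insult']
```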

Citation

@misc{detoxify,
  title={Detoxify},
  author={Hanu, Laura and Unitary team},
  howpublished={Github. https://github.com/unitaryai/detoxify},
  year={2020}
}