This model is a clone of SkolkovoInstitute/roberta_toxicity_classifier, trained on a disjoint dataset.
While roberta_toxicity_classifier is used to evaluate detoxification algorithms, roberta_toxicity_classifier_v1 can be used as a component inside those algorithms themselves, as in the paper Text Detoxification using Large Pre-trained Neural Models.