Dialogue Chain-of-Thought Distillation for Commonsense-aware Conversational Agents
Paper: arXiv:2310.09343
DOCTOR is a dialogue commonsense reasoner that generates chain-of-thought knowledge in a multi-hop manner given a dialogue history. DOCTOR is trained on DONUT, which is also available on Hugging Face.

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LangAGI-Lab/DOCTOR")
model = AutoModelForCausalLM.from_pretrained("LangAGI-Lab/DOCTOR")
```
For more details, please refer to our paper, *Dialogue Chain-of-Thought Distillation for Commonsense-aware Conversational Agents*.
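Since DOCTOR takes a dialogue history as input, the turns must be serialized into a single prompt string before tokenization. The sketch below shows one way to do this; note that the template (speaker-prefixed turns followed by a "Rationale:" cue) is an illustrative assumption, not DOCTOR's official input format, so please check the paper and the DONUT dataset for the exact formatting.

```python
# Minimal sketch of serializing a dialogue history into a prompt string.
# NOTE: this template is an illustrative assumption, not DOCTOR's official
# input format; see the paper / DONUT dataset for the exact formatting.

def build_prompt(turns):
    """Join (speaker, utterance) pairs into one prompt ending with a rationale cue."""
    lines = [f"{speaker}: {utterance}" for speaker, utterance in turns]
    return "\n".join(lines) + "\nRationale:"

history = [
    ("A", "I just moved to a new city and I don't know anyone here."),
    ("B", "That sounds tough. Have you tried joining any local groups?"),
]
prompt = build_prompt(history)
print(prompt)
```

The resulting string can then be passed to the tokenizer and `model.generate` as in the loading snippet above.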
If you find the following model helpful, please consider citing our paper!
BibTeX:
```bibtex
@misc{chae2023dialogue,
      title={Dialogue Chain-of-Thought Distillation for Commonsense-aware Conversational Agents},
      author={Hyungjoo Chae and Yongho Song and Kai Tzu-iunn Ong and Taeyoon Kwon and Minjin Kim and Youngjae Yu and Dongha Lee and Dongyeop Kang and Jinyoung Yeo},
      year={2023},
      eprint={2310.09343},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="LangAGI-Lab/DOCTOR")
```