---
license: apache-2.0
library_name: transformers
pipeline_tag: text-classification
tags:
- hallucination-detection
- text-classification
language:
- en
---
# ANAH: Analytical Annotation of Hallucinations in Large Language Models
This page holds the InternLM2-7B model trained on the ANAH dataset. It is fine-tuned to annotate hallucinations in LLM responses.

For more information, please refer to our project page.
## 🤗 How to use the model
You have to follow the prompt in our paper to annotate hallucinations.

The model follows the conversation format of InternLM2-chat, with the template protocol:
```python
dict(role='user', begin='<|im_start|>user\n', end='<|im_end|>\n'),
dict(role='assistant', begin='<|im_start|>assistant\n', end='<|im_end|>\n'),
```
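As a rough illustration, the sketch below loads the model with Hugging Face `transformers` and queries it through the `chat` helper exposed by InternLM2-based models (which applies the `<|im_start|>`/`<|im_end|>` template above). The repository id `opencompass/anah-7b` and the prompt string are assumptions/placeholders; substitute the actual model path and the annotation prompt from our paper.

```python
# Minimal usage sketch (not the official inference script).
# Assumptions: the repo id and the prompt below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "opencompass/anah-7b"  # assumption: replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
).eval()

# The query must follow the annotation prompt from the paper;
# this string is only a placeholder for illustration.
prompt = "<question, reference, and the response sentence to annotate, formatted as in the paper>"

# InternLM2-chat models expose `chat`, which wraps the conversation template shown above.
response, history = model.chat(tokenizer, prompt)
print(response)
```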
## 🖊️ Citation
If you find this project useful in your research, please consider citing:
```bibtex
@article{ji2024anah,
  title={ANAH: Analytical Annotation of Hallucinations in Large Language Models},
  author={Ji, Ziwei and Gu, Yuzhe and Zhang, Wenwei and Lyu, Chengqi and Lin, Dahua and Chen, Kai},
  journal={arXiv preprint arXiv:2405.20315},
  year={2024}
}
```
Code: The source code for training and evaluating this model can be found at https://github.com/open-compass/ANAH