ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models
Paper • arXiv:2407.04693 • Published
How to use opencompass/anah-v2 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="opencompass/anah-v2", trust_remote_code=True)

# Or load the model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("opencompass/anah-v2", trust_remote_code=True, dtype="auto")
```

This page hosts the ANAH-v2 model, which is trained on InternLM2-7B and fine-tuned to annotate hallucinations in LLMs' responses.
For more information, please refer to our project page.

To annotate hallucinations, you need to follow the prompt from our paper, which you can find here.

We also provide some examples of using the ANAH-v2 annotator, which you can refer to when annotating your own content.
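As a rough sketch of what such an annotation call might look like, the helper below assembles a prompt and runs the model on a single sentence. Note that the prompt template and the field names (`question`, `reference`, `sentence`) are placeholder assumptions; the exact annotation prompt must be taken from the paper.

```python
# Hedged sketch: the real annotation prompt template comes from the ANAH-v2
# paper; the template and field names below are placeholder assumptions.

def build_annotation_prompt(question: str, reference: str, sentence: str) -> str:
    """Assemble a hallucination-annotation prompt (placeholder template)."""
    return (
        f"Question: {question}\n"
        f"Reference: {reference}\n"
        f"Sentence to annotate: {sentence}\n"
        "Annotation:"
    )

def annotate(question: str, reference: str, sentence: str,
             model_name: str = "opencompass/anah-v2") -> str:
    """Run the annotator model on one sentence and return its raw output."""
    # Imported here so the prompt helper above stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, trust_remote_code=True, dtype="auto"
    )
    prompt = build_annotation_prompt(question, reference, sentence)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Strip the prompt tokens and decode only the newly generated annotation.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Greedy decoding (`do_sample=False`) is used here so that repeated annotation runs on the same sentence are deterministic.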
If you find this project useful in your research, please consider citing:
```bibtex
@article{gu2024anah,
  title={ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models},
  author={Gu, Yuzhe and Ji, Ziwei and Zhang, Wenwei and Lyu, Chengqi and Lin, Dahua and Chen, Kai},
  journal={arXiv preprint arXiv:2407.04693},
  year={2024}
}
```