Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Symbol-LLM/ENVISIONS_7B_math_iter10")
model = AutoModelForCausalLM.from_pretrained("Symbol-LLM/ENVISIONS_7B_math_iter10")
```

Paper Link: https://arxiv.org/abs/2406.11736
Code Repo: https://github.com/xufangzhi/ENVISIONS
The self-training process is based on the LLaMA2-Chat model series and powered by ENVISIONS. The work is still under review.
The prompt template for this model is (a worked example follows below):

```
Write Python code to solve the question.
The question is: <question>
The solution code is:
```
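As a minimal sketch of how to combine the template with the model loaded above: fill `<question>` with your problem statement and decode only the newly generated tokens. The sample question and decoding settings here are illustrative assumptions, not values from the paper.

```python
# Build a prompt from the template above; the question is a made-up example.
question = "What is the sum of the first 10 positive integers?"
prompt = (
    "Write Python code to solve the question.\n"
    f"The question is: {question}\n"
    "The solution code is:"
)

# Tokenize, generate, and decode only the continuation (not the prompt itself).
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```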
If you find this work helpful, please kindly cite the paper:
```bibtex
@misc{xu2024interactive,
      title={Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models},
      author={Fangzhi Xu and Qiushi Sun and Kanzhi Cheng and Jun Liu and Yu Qiao and Zhiyong Wu},
      year={2024},
      eprint={2406.11736},
      archivePrefix={arXiv},
}
```
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Symbol-LLM/ENVISIONS_7B_math_iter10")
```
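The pipeline can be called with the same prompt template. The question and generation arguments below are illustrative, not prescribed by the model card.

```python
# Illustrative pipeline call using the documented prompt template.
prompt = (
    "Write Python code to solve the question.\n"
    "The question is: Compute the area of a circle with radius 3.\n"
    "The solution code is:"
)
result = pipe(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```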