---
license: apache-2.0
datasets:
- wangrui6/Zhihu-KOL
language:
- zh
base_model:
- lucky2me/Dorami
---

# Dorami-Instruct

Dorami-Instruct is a supervised fine-tuning (SFT) model based on the pretrained model [lucky2me/Dorami](https://huggingface.co/lucky2me/Dorami).

## Model description

### Training data

- [wangrui6/Zhihu-KOL](https://huggingface.co/datasets/wangrui6/Zhihu-KOL)

### Training code

- [dorami](https://github.com/6zeus/dorami.git)

## How to use

### 1. Download the model from the Hugging Face Hub

```
git lfs install
git clone https://huggingface.co/lucky2me/Dorami-Instruct
```

### 2. Use the downloaded model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_path = "The path of the model downloaded above"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

prompt = "Fill in any prompt you like."
inputs = tokenizer(prompt, return_tensors="pt")

generation_config = GenerationConfig(
    max_new_tokens=64,
    do_sample=True,
    top_k=2,
    eos_token_id=model.config.eos_token_id,
)
outputs = model.generate(**inputs, generation_config=generation_config)
decoded_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(decoded_text)
```
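As an alternative to cloning the repository with `git lfs`, the `transformers` `pipeline` API can usually fetch a model directly by its Hub ID. A minimal sketch, assuming the repository ID `lucky2me/Dorami-Instruct` resolves on the Hub and the model fits in local memory:

```python
from transformers import pipeline

# Downloads and caches the model from the Hub on first use;
# no manual clone step is needed.
generator = pipeline("text-generation", model="lucky2me/Dorami-Instruct")

# Generation arguments mirror the GenerationConfig used above.
outputs = generator("你好", max_new_tokens=64, do_sample=True, top_k=2)
print(outputs)
```

This trades explicit control over the local path for convenience; the cached files land under `~/.cache/huggingface` by default.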