---
language:
- ko
pipeline_tag: text-generation
tags:
- finetune
---
# Model Card for mistral-ko-OpenOrca-2000

A Mistral-7B model fine-tuned on Korean data.

## Model Details

* **Model Developers** : shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
* **Repository** : To be added
* **Model Architecture** : shleeeee/mistral-ko-OpenOrca-2000 is a fine-tuned version of Mistral-7B-v0.1.
* **LoRA target modules** : q_proj, k_proj, v_proj, o_proj, gate_proj
* **train_batch** : 4
* **epochs** : 2

## Dataset

2,000 samples from the ko-OpenOrca dataset.

## Prompt template: Mistral

```
[INST]{instruction}[/INST]{output}
```

## Usage

```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-ko-OpenOrca-2000")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-ko-OpenOrca-2000")

# Or use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="shleeeee/mistral-ko-OpenOrca-2000")
```

## Evaluation

To be added.
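Since the model was trained with the Mistral `[INST] ... [/INST]` template, prompts at inference time should follow the same format. A minimal sketch of a prompt-formatting helper (the `make_prompt` function is illustrative, not part of the model's API):

```python
def make_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Mistral [INST] ... [/INST] template."""
    return f"[INST]{instruction}[/INST]"

# Example: format a Korean instruction for generation
prompt = make_prompt("대한민국의 수도는 어디인가요?")
print(prompt)  # [INST]대한민국의 수도는 어디인가요?[/INST]

# The formatted prompt can then be passed to the pipeline, e.g.:
# pipe = pipeline("text-generation", model="shleeeee/mistral-ko-OpenOrca-2000")
# result = pipe(prompt, max_new_tokens=128)
```

The model's completion is everything generated after the closing `[/INST]` tag.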