# NLP-07-ODQA/gemma-ko-2b-lora-v1
This model was trained for the Generation for NLP competition.
## Model Details

- Organization: NLP-07-ODQA
- Experiment: gemma-ko-2b-lora-v1
- Checkpoint: best_model
- Original checkpoint: checkpoint-4491 (the best checkpoint selected during training)
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("NLP-07-ODQA/gemma-ko-2b-lora-v1")
tokenizer = AutoTokenizer.from_pretrained("NLP-07-ODQA/gemma-ko-2b-lora-v1")
```