# NLP-07-ODQA/llama3-8b-openko-v2

This model was trained for the Generation for NLP competition.
## Model Details

- **Organization:** NLP-07-ODQA
- **Experiment:** llama3-8b-openko-v2
- **Checkpoint:** best_model
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("NLP-07-ODQA/llama3-8b-openko-v2")
tokenizer = AutoTokenizer.from_pretrained("NLP-07-ODQA/llama3-8b-openko-v2")
```
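Beyond loading the checkpoint, a minimal generation loop might look like the sketch below. The instruction-style prompt template, the `build_prompt` helper, and the `max_new_tokens` value are illustrative assumptions, not the competition's actual training format:

```python
def build_prompt(question: str) -> str:
    # Plain instruction-style template; the real training format is an assumption here.
    return f"### Question:\n{question}\n\n### Answer:\n"

def generate_answer(question: str, max_new_tokens: int = 128) -> str:
    # Imported lazily so the prompt helper above works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "NLP-07-ODQA/llama3-8b-openko-v2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )

# Example (requires downloading the model weights):
# print(generate_answer("세종대왕이 창제한 문자는 무엇인가?"))
```

Greedy decoding (`do_sample=False`) is used here for reproducibility; sampling parameters can be passed to `generate` instead.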