NLP-07-ODQA/Qwen2.5-32B-Instruct-bnb-4bit_2

Experiment: Qwen2.5-32B-Instruct-bnb-4bit_2

This model was trained for the Generation for NLP competition.

Model Details

  • Organization: NLP-07-ODQA
  • Experiment: Qwen2.5-32B-Instruct-bnb-4bit_2
  • Checkpoint: best_model

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# This checkpoint is a bnb-4bit quantized model, so the bitsandbytes package
# must be installed. device_map="auto" (a common choice, not required) lets
# transformers place the weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(
    "NLP-07-ODQA/Qwen2.5-32B-Instruct-bnb-4bit_2",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("NLP-07-ODQA/Qwen2.5-32B-Instruct-bnb-4bit_2")
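Since this is an instruct-tuned Qwen2.5 model, prompts should follow its chat format. The authoritative way is `tokenizer.apply_chat_template(...)`; the sketch below only illustrates the ChatML shape (`<|im_start|>` / `<|im_end|>` markers) that Qwen2.5 instruct models use, as an assumption of what the template renders to, without needing the model weights:

```python
def to_chatml(messages):
    """Render a list of {role, content} dicts in ChatML form.

    This is an illustrative sketch of the Qwen2.5 chat format; in practice
    use tokenizer.apply_chat_template, which also adds the default system
    prompt and handles special tokens correctly.
    """
    text = ""
    for m in messages:
        text += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Leave the assistant turn open so the model completes it.
    text += "<|im_start|>assistant\n"
    return text

prompt = to_chatml([{"role": "user", "content": "What is open-domain QA?"}])
print(prompt)
```

The rendered string can then be tokenized and passed to `model.generate` as usual.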