PFSP LLaMA 8B Fine-tuned Model

Model Description

A LLaMA 8B model fine-tuned for the Permutation Flow Shop Scheduling Problem (PFSP): given a set of jobs that must all pass through the same sequence of machines, the task is to find the job permutation that minimizes the makespan.
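For context, a PFSP solution is scored by its makespan, computed with the standard completion-time recursion. A minimal sketch (illustrative instance data, independent of the model):

```python
from itertools import permutations

def makespan(proc_times, order):
    """Makespan of a permutation flow shop schedule.

    proc_times[j][m] = processing time of job j on machine m;
    every job visits machines 0..M-1 in the same order.
    """
    n_machines = len(proc_times[0])
    finish = [0] * n_machines  # finish[m] = completion time of the previous job on machine m
    for job in order:
        prev = 0
        for m in range(n_machines):
            # A job starts on machine m only after it leaves machine m-1
            # and after the previous job frees machine m.
            prev = max(prev, finish[m]) + proc_times[job][m]
            finish[m] = prev
    return finish[-1]

# Hypothetical 3-job, 2-machine instance (toy data for illustration only).
times = [[3, 2], [1, 4], [2, 1]]
best = min(permutations(range(len(times))), key=lambda seq: makespan(times, seq))
print(best, makespan(times, best))  # → (1, 0, 2) 8
```

Brute-force enumeration is only feasible for tiny instances; PFSP is NP-hard in general, which is what motivates heuristic approaches.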

Usage

```python
import torch
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="sooyeon1/pfsp_llama8b_default_r64_ep2",
    max_seq_length=40000,
    load_in_4bit=True,
    dtype=torch.bfloat16,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's optimized inference mode
```
