# LoRA Adapter for DippyResearch/super-cot-v1
This is a LoRA (Low-Rank Adaptation) adapter trained with VERL for roleplay tasks.
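As background, LoRA freezes the base model's weights and trains only two small low-rank factor matrices per adapted layer. A quick back-of-the-envelope sketch of the parameter savings (the dimensions and rank below are illustrative, not this adapter's actual config):

```python
# LoRA replaces the update to a frozen d x k weight matrix W with the
# product B @ A, where B is d x r, A is r x k, and r << min(d, k).
d, k = 4096, 4096   # hypothetical hidden dims of one attention projection
r = 16              # hypothetical LoRA rank

full_params = d * k           # weights touched by a full fine-tune of W
lora_params = r * (d + k)     # weights actually trained by LoRA

print(full_params)   # 16777216
print(lora_params)   # 131072 -- under 1% of the full matrix
```

Because only the small A and B factors are trained and shipped, the adapter file is a tiny fraction of the 24B base model's size.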
## Usage
### With vLLM
```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(model="/workspace/models/Cydonia-24B-v4.1", enable_lora=True)

# The adapter is applied per request via LoRARequest(name, id, path);
# --lora-modules is a flag for the vLLM server CLI, not an LLM() argument.
outputs = llm.generate(
    ["Your prompt here"],
    SamplingParams(max_tokens=256),
    lora_request=LoRARequest("super-cot-v1", 1, "DippyResearch/super-cot-v1"),
)
```
### With Transformers + PEFT
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("/workspace/models/Cydonia-24B-v4.1")
base_model = AutoModelForCausalLM.from_pretrained("/workspace/models/Cydonia-24B-v4.1")
model = PeftModel.from_pretrained(base_model, "DippyResearch/super-cot-v1")
```
## Training Details
- Framework: VERL (GRPO)
- Base Model: Cydonia-24B-v4.1
- Training Type: Roleplay fine-tuning
- Checkpoint: actor