How to use ToastyPigeon/muse-marvin-lora-2 with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM
base_model = AutoModelForCausalLM.from_pretrained("LatitudeGames/Muse-12B")
model = PeftModel.from_pretrained(base_model, "ToastyPigeon/muse-marvin-lora-2")
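Once the adapter is attached, the wrapped model generates like any other causal LM, and you can merge the adapter into the base weights for standalone use. A minimal sketch, assuming the base model ships a chat template:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LatitudeGames/Muse-12B")
# Build a prompt with the base model's chat template
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Who are you?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:]))

# Optionally bake the adapter into the base weights (returns a plain model)
merged_model = model.merge_and_unload()

How to use ToastyPigeon/muse-marvin-lora-2 with Transformers: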
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="ToastyPigeon/muse-marvin-lora-2")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)
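The pipeline resolves the adapter repo directly (this requires peft to be installed). For a 12B model you will likely want reduced precision and automatic device placement; a hedged sketch using standard pipeline options:

import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="ToastyPigeon/muse-marvin-lora-2",
    torch_dtype=torch.bfloat16,  # halves memory vs. float32
    device_map="auto",           # spread layers across available devices
)
print(pipe(messages, max_new_tokens=40)[0]["generated_text"])

# Load model directly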
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("ToastyPigeon/muse-marvin-lora-2")
model = AutoModelForCausalLM.from_pretrained("ToastyPigeon/muse-marvin-lora-2")
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
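If the full-precision weights do not fit in memory, you can load in bfloat16 with automatic device placement. A sketch of standard from_pretrained options, not specific to this repo:

import torch
model = AutoModelForCausalLM.from_pretrained(
    "ToastyPigeon/muse-marvin-lora-2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

How to use ToastyPigeon/muse-marvin-lora-2 with vLLM: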
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "ToastyPigeon/muse-marvin-lora-2"
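# Note: this repo is a LoRA adapter, not full weights. If serving it directly
# fails, serve the base model and attach the adapter via vLLM's LoRA support
# (hedged sketch; the adapter is then addressed by its registered name):
# vllm serve "LatitudeGames/Muse-12B" --enable-lora --lora-modules marvin=ToastyPigeon/muse-marvin-lora-2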
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ToastyPigeon/muse-marvin-lora-2",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
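Since the API is OpenAI-compatible, you can also call the server from Python with the OpenAI client (assumes pip install openai; the api_key is a placeholder because vLLM does not require one by default):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="ToastyPigeon/muse-marvin-lora-2",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)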
How to use ToastyPigeon/muse-marvin-lora-2 with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "ToastyPigeon/muse-marvin-lora-2" \
--host 0.0.0.0 \
--port 30000
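# Note: as with vLLM, this repo is a LoRA adapter; if loading it directly
# fails, point --model-path at the base model and attach the adapter
# (hedged sketch; check SGLang's LoRA docs for the exact flag syntax):
# python3 -m sglang.launch_server --model-path "LatitudeGames/Muse-12B" --lora-paths marvin=ToastyPigeon/muse-marvin-lora-2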
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ToastyPigeon/muse-marvin-lora-2",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
# Or run the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "ToastyPigeon/muse-marvin-lora-2" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ToastyPigeon/muse-marvin-lora-2",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
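The same OpenAI-compatible client pattern shown for vLLM works here; only the port differs (assumes pip install openai):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="ToastyPigeon/muse-marvin-lora-2",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)

How to use ToastyPigeon/muse-marvin-lora-2 with Docker Model Runner: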
docker model run hf.co/ToastyPigeon/muse-marvin-lora-2
The following axolotl configuration was used to train this model (axolotl version: 0.13.0.dev0):
# !pip install transformers==4.55.4
# !pip install --no-deps trl==0.22.2
# !pip install --no-build-isolation mamba_ssm==2.2.5
# !pip install --no-build-isolation causal_conv1d==1.5.2
# === Model Configuration ===
base_model: LatitudeGames/Muse-12B
load_in_8bit: false
load_in_4bit: true
# === HF Configuration ===
hub_model_id: ToastyPigeon/muse-marvin-lora-2
hub_strategy: "every_save"
output_dir: ckpts-mmarv
# === Training Setup ===
num_epochs: 1
micro_batch_size: 1
gradient_accumulation_steps: 4
sequence_len: 16384
#sequence_parallel_degree: 2
#heads_k_stride: 1
sample_packing: true
pad_to_sequence_len: true
#temperature: 0.7
#max_steps: 10
# === Evaluation ===
val_set_size: 0.025
evals_per_epoch: 10
#eval_steps: 20
#max_steps: 60
#eval_table_size:
eval_max_new_tokens: 128
#eval_sample_packing: true
#eval_strategy: "no"
# === LoRA Configuration ===
adapter: qlora
lora_model_dir:
lora_r: 32
lora_alpha: 32
lora_dropout: 0.1
lora_target_linear: true
lora_target_modules:
# - q_proj
# - v_proj
# - k_proj
# - o_proj
lora_fan_in_fan_out:
peft_use_rslora: false
#lora_modules_to_save:
# - embed_tokens
# - lm_head
#fix_untrained_tokens: true
#lora_mlp_kernel: true
#lora_qkv_kernel: true
#lora_o_kernel: true
# === Hyperparameter Configuration ===
#optimizer: apollo_adamw_layerwise
#warmup_steps: 0
warmup_ratio: 0.025
optimizer: adamw_torch_fused
#optimizer: paged_adamw_8bit
#optim_args:
# enable_stochastic_rounding: true
# enable_cautious: true
# enable_8bit: true
# Apollo-mini configuration:
#optim_args: "proj=random,rank=128,scale=128.0,scale_type=tensor,update_proj_gap=100"
# Regular Apollo configuration:
# optim_args:
#optim_target_modules: all_linear
learning_rate: 1e-5
lr_scheduler: cosine
#cosine_min_lr_ratio: 0.2
#lr_scheduler: cosine_with_min_lr
#lr_scheduler_kwargs:
# cosine_min_lr: 1e-6
weight_decay: 0.01
max_grad_norm: 1.0
#warmup_steps: 0
#warmup_ratio: 0.025
# === Data Configuration ===
#
#chat_template: jinja
#chat_template: chatml
special_tokens:
# eos_token: "<|im_end|>"
# eos_token: "</s>"
#tokenizer_use_mistral_common: true
shuffle_merged_datasets: true
datasets:
- path: grimulkan/LimaRP-augmented
type: chat_template
field_messages: conversations
message_property_mappings:
role: from
content: value
# - path: allenai/tulu-3-sft-personas-instruction-following
# type: chat_template
# split: train[:10%]
# - path: ToastyPigeon/mixed-medical-reasoning-formatted
# type: chat_template
# data_files: mixed-medical-thinking.json
# split: train[:10%]
- path: ToastyPigeon/steve-and-marvin
type: completion
data_files: marvin.json
- path: ToastyPigeon/kimi-stories-completion
type: completion
# - path: ToastyPigeon/new-story-dataset
# type: customcompletion-regex
# type: completion
# data_files: new-story-dataset-v2.json
# - path: allura-org/fujin-instruct-v2
# type: customchatml-regex
# type: chat_template
# field_messages: conversations
# message_property_mappings:
# role: from
# content: value
# - path: ToastyPigeon/some-rp-extended
# type: customchatml-regex
# type: chat_template
# field_messages: conversations
# message_property_mappings:
# role: from
# content: value
# roles_to_train: ["user","assistant"]
# - path: ToastyPigeon/gutenberg-sft
# type: customchatml-regex
# type: chat_template
# field_messages: conversations
# message_property_mappings:
# role: from
# content: value
# - path: ToastyPigeon/SpringDragon
# type: customcompletion-regex
# type: completion
# split: train
# - path: ToastyPigeon/some-erotica
# type: customcompletion-regex
# type: completion
# split: train[:10%]
dataset_prepared_path: last_run_prepared
# === Plugins ===
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
# === Hardware Optimization ===
#gradient_checkpointing: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
#liger_fused_linear_cross_entropy: true
cut_cross_entropy: true
#deepspeed: ../axolotl/deepspeed_configs/zero3_bf16_cpuoffload_params.json
# === FSDP Config ===
fsdp:
- full_shard
- auto_wrap
fsdp_config:
fsdp_limit_all_gathers: true
fsdp_sync_module_states: true
fsdp_offload_params: true
fsdp_activation_checkpointing: true
fsdp_use_orig_params: false
fsdp_cpu_ram_efficient_loading: true
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_transformer_layer_cls_to_wrap: MistralDecoderLayer
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sharding_strategy: FULL_SHARD
# fsdp_version: 2
# === Wandb Tracking ===
wandb_project: MuseMarvin
# wandb_entity: [WANDB_ENTITY]
wandb_name: r32-qlora-all-linear
# === Checkpointing ===
#save_steps: 10
saves_per_epoch: 10
save_total_limit: 1
# === Advanced Settings ===
bf16: auto
flash_attention: true
train_on_inputs: false
group_by_length: false
save_safetensors: true
logging_steps: 1
gc_steps: 10
seed: 69
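For reference, the message_property_mappings in the datasets section above translate ShareGPT-style records (as found in grimulkan/LimaRP-augmented) into the role/content messages the chat template expects. An illustrative Python sketch with a made-up record:

# Hypothetical record shape; the field names match the config's mappings
record = {
    "conversations": [
        {"from": "human", "value": "Hello!"},
        {"from": "gpt", "value": "Hi there."},
    ]
}
# from -> role, value -> content, per message_property_mappings above
messages = [
    {"role": turn["from"], "content": turn["value"]}
    for turn in record["conversations"]
]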
This model is a fine-tuned version of LatitudeGames/Muse-12B on the grimulkan/LimaRP-augmented, ToastyPigeon/steve-and-marvin, and ToastyPigeon/kimi-stories-completion datasets. It achieves the following results on the evaluation set:
- Loss: 2.4151
The following hyperparameters were used during training (from the config above):
- learning_rate: 1e-05
- micro_batch_size: 1
- gradient_accumulation_steps: 4
- optimizer: adamw_torch_fused
- lr_scheduler: cosine
- warmup_ratio: 0.025
- weight_decay: 0.01
- max_grad_norm: 1.0
- num_epochs: 1
- seed: 69

Training results:
| Training Loss | Epoch | Step | Validation Loss | Active (GiB) | Allocated (GiB) | Reserved (GiB) |
|---|---|---|---|---|---|---|
| No log | 0 | 0 | 2.5323 | 8.04 | 6.73 | 8.36 |
| 2.5701 | 0.1032 | 24 | 2.4684 | 5.33 | 5.32 | 7.1 |
| 2.4005 | 0.2065 | 48 | 2.4408 | 5.33 | 5.32 | 7.1 |
| 2.358 | 0.3097 | 72 | 2.4302 | 5.33 | 5.32 | 7.1 |
| 2.2869 | 0.4129 | 96 | 2.4240 | 5.33 | 5.32 | 7.1 |
| 2.4939 | 0.5161 | 120 | 2.4198 | 5.33 | 5.32 | 7.1 |
| 2.6741 | 0.6194 | 144 | 2.4175 | 5.33 | 5.32 | 7.1 |
| 2.3114 | 0.7226 | 168 | 2.4160 | 5.33 | 5.32 | 7.1 |
| 2.3292 | 0.8258 | 192 | 2.4153 | 5.33 | 5.32 | 7.1 |
| 2.5872 | 0.9290 | 216 | 2.4151 | 5.33 | 5.32 | 7.1 |