How to use lina5555/unmaskednemo with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="lina5555/unmaskednemo")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("lina5555/unmaskednemo")
model = AutoModelForCausalLM.from_pretrained("lina5555/unmaskednemo")
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
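# generate() returns the prompt and the completion as one sequence; the slice
# below drops the prompt tokens so only the newly generated reply is decoded.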
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

How to use lina5555/unmaskednemo with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "lina5555/unmaskednemo"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "lina5555/unmaskednemo",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
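Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch using the openai client package (the api_key here is a placeholder; vLLM only checks it if the server was started with --api-key):

# pip install openai
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="lina5555/unmaskednemo",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)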
How to use lina5555/unmaskednemo with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "lina5555/unmaskednemo" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "lina5555/unmaskednemo",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

# Alternatively, run the SGLang server in Docker instead of installing it with pip:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "lina5555/unmaskednemo" \
--host 0.0.0.0 \
--port 30000
# Call the server using the same curl command shown above.
How to use lina5555/unmaskednemo with Docker Model Runner:
docker model run hf.co/lina5555/unmaskednemo
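By default this opens an interactive chat session in the terminal; a one-shot prompt can also be passed as an argument (a sketch, assuming a recent Docker Model Runner CLI):

docker model run hf.co/lina5555/unmaskednemo "What is the capital of France?"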
Built with Axolotl (version 0.8.0); the training config used:
base_model: mistralai/Mistral-Nemo-Instruct-2407
model_type: MistralForCausalLM
hub_model_id: Alignment-Lab-AI/linabot
strict: false
chat_template: tokenizer_default
plugins:
- axolotl.integrations.liger.LigerPlugin
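# Liger kernel fusions (RoPE, RMSNorm, GLU activation, layer norm, and fused
# linear cross-entropy) to reduce memory use and speed up training: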
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true
datasets:
- path: linabot/train_data
type: chat_template
field_messages: messages
message_property_mappings:
role: role
content: content
roles_to_train: ['assistant', 'user']
train_on_eos: turn
learning_rate: 2e-5
lr_scheduler: cosine
weight_decay: 0.03
warmup_steps: 450
dataset_prepared_path:
val_set_size: 0.2
output_dir: ./outputs/out
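# Pack multiple short conversations into each 10400-token sequence to improve
# throughput, padding to the full length for fixed shapes: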
sequence_len: 10400
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: true
wandb_project: linabot
wandb_entity:
wandb_watch: all
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 5
optimizer: adalomo
lr_scheduler: cosine
learning_rate: 0.0002024
flash_attention: true
flash_attn_cross_entropy: false
flash_attn_rms_norm: true
flash_attn_fuse_qkv: false
flash_attn_fuse_mlp: true
torch_compile_mode: "max-autotune"
bf16: auto
tf32: false
gradient_checkpointing: true
resume_from_checkpoint:
logging_steps: 1
evals_per_epoch: 8
saves_per_epoch: 1
weight_decay: 0.03
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
pad_token: "<pad>"
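The datasets entry above expects chat-formatted records, with each row holding a list of role/content turns under the messages field. A quick way to inspect the format (a sketch, assuming the dataset is accessible on the Hub):

# pip install datasets
from datasets import load_dataset

ds = load_dataset("linabot/train_data", split="train")
# Each record is a list of {"role": ..., "content": ...} turns.
print(ds[0]["messages"])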
This model is a fine-tuned version of mistralai/Mistral-Nemo-Instruct-2407 on the linabot/train_data dataset. It reaches a final validation loss of 0.0558 on the held-out evaluation set (20% of the data, per val_set_size above).
The hyperparameters used during training are those in the Axolotl config above; where a key appears twice (e.g. learning_rate), the later value wins when the YAML is loaded, so the effective learning rate is 0.0002024 with the adalomo optimizer and a cosine schedule. Evaluations ran every 15 steps, 8 per 120-step epoch, matching evals_per_epoch: 8:
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.526 | 0.0083 | 1 | 1.5474 |
| 1.5934 | 0.125 | 15 | 1.5472 |
| 1.5242 | 0.25 | 30 | 1.5454 |
| 1.5296 | 0.375 | 45 | 1.5408 |
| 1.5087 | 0.5 | 60 | 1.5322 |
| 1.486 | 0.625 | 75 | 1.5188 |
| 1.4314 | 0.75 | 90 | 1.5005 |
| 1.4311 | 0.875 | 105 | 1.4782 |
| 1.4532 | 1.0 | 120 | 1.4513 |
| 1.4215 | 1.125 | 135 | 1.4198 |
| 1.3248 | 1.25 | 150 | 1.3825 |
| 1.2697 | 1.375 | 165 | 1.3386 |
| 1.3281 | 1.5 | 180 | 1.2880 |
| 1.2428 | 1.625 | 195 | 1.2296 |
| 1.1533 | 1.75 | 210 | 1.1596 |
| 1.1038 | 1.875 | 225 | 1.0747 |
| 1.0226 | 2.0 | 240 | 0.9723 |
| 0.8858 | 2.125 | 255 | 0.8467 |
| 0.6762 | 2.25 | 270 | 0.7047 |
| 0.6433 | 2.375 | 285 | 0.5626 |
| 0.4017 | 2.5 | 300 | 0.4283 |
| 0.2875 | 2.625 | 315 | 0.3072 |
| 0.2244 | 2.75 | 330 | 0.2161 |
| 0.1445 | 2.875 | 345 | 0.1572 |
| 0.0898 | 3.0 | 360 | 0.1192 |
| 0.0666 | 3.125 | 375 | 0.0991 |
| 0.0605 | 3.25 | 390 | 0.0855 |
| 0.0457 | 3.375 | 405 | 0.0757 |
| 0.052 | 3.5 | 420 | 0.0700 |
| 0.0634 | 3.625 | 435 | 0.0658 |
| 0.0364 | 3.75 | 450 | 0.0623 |
| 0.045 | 3.875 | 465 | 0.0601 |
| 0.0395 | 4.0 | 480 | 0.0582 |
| 0.0558 | 4.125 | 495 | 0.0573 |
| 0.0468 | 4.25 | 510 | 0.0566 |
| 0.0399 | 4.375 | 525 | 0.0562 |
| 0.0337 | 4.5 | 540 | 0.0560 |
| 0.0413 | 4.625 | 555 | 0.0559 |
| 0.0318 | 4.75 | 570 | 0.0558 |
| 0.0435 | 4.875 | 585 | 0.0558 |
| 0.0445 | 5.0 | 600 | 0.0558 |
Base model: mistralai/Mistral-Nemo-Instruct-2407 (itself derived from mistralai/Mistral-Nemo-Base-2407).