See axolotl config
axolotl version: 0.4.1
adapter: lora
base_model: princeton-nlp/Sheared-LLaMA-1.3B
bf16: true
chat_template: llama3
cosine_min_lr_ratio: 0.3
dataset_prepared_path: null
datasets:
- data_files:
  - 9111ab42dd82442c_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/9111ab42dd82442c_train_data.json
  type:
    field_input: input
    field_instruction: instruction
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 4
eval_max_new_tokens: 128
eval_steps: 200
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/ff233829-a82a-4192-8f8d-1f8a03a1a77d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 6052
micro_batch_size: 4
mlflow_experiment_name: /tmp/9111ab42dd82442c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 200
sequence_len: 2048
special_tokens:
  pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.045059252917586626
wandb_entity: null
wandb_mode: online
wandb_name: 71ac6567-ec89-44ef-90cf-3963c0514683
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 71ac6567-ec89-44ef-90cf-3963c0514683
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
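The `format` and `no_input_format` templates in the datasets block above control how each JSON record (with `instruction`, `input`, and `output` fields) is flattened into a training prompt. Below is a minimal sketch of that rendering with a hypothetical record for illustration; axolotl applies these templates internally, so this is only an approximation of its behavior.

```python
# Illustration of the prompt templates from the config above (not axolotl's own code).
FORMAT = "{instruction} {input}"   # used when the record has a non-empty `input`
NO_INPUT_FORMAT = "{instruction}"  # used when `input` is missing or empty

def render_prompt(record: dict) -> str:
    """Render one training record the way `format`/`no_input_format` describe."""
    if record.get("input"):
        return FORMAT.format(instruction=record["instruction"], input=record["input"])
    return NO_INPUT_FORMAT.format(instruction=record["instruction"])

# Hypothetical record for illustration only.
record = {
    "instruction": "Summarize the text.",
    "input": "LoRA adapts large models by training small low-rank matrices.",
    "output": "LoRA fine-tunes models cheaply with low-rank updates.",
}
print(render_prompt(record))
# -> "Summarize the text. LoRA adapts large models by training small low-rank matrices."
```

The `output` field is used as the completion target, and with `train_on_inputs: false` the loss is computed only on those completion tokens.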
ff233829-a82a-4192-8f8d-1f8a03a1a77d
This model is a LoRA fine-tuned version of princeton-nlp/Sheared-LLaMA-1.3B, trained on the dataset specified in the axolotl configuration above. It achieves the following results on the evaluation set:
- Loss: 1.6631
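Because the config uses `adapter: lora`, the published weights are a PEFT adapter rather than a standalone model, so they must be loaded on top of the base checkpoint. A minimal inference sketch with transformers and peft, assuming the adapter is available under the `hub_model_id` from the config above:

```python
# Minimal inference sketch, assuming the adapter weights are published at the
# hub_model_id given in the config (Romain-XV/ff233829-a82a-4192-8f8d-1f8a03a1a77d).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "princeton-nlp/Sheared-LLaMA-1.3B"
adapter_id = "Romain-XV/ff233829-a82a-4192-8f8d-1f8a03a1a77d"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
model.eval()

# Prompts follow the '{instruction} {input}' template from the config.
prompt = "Summarize the text. LoRA adapts large models cheaply."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)  # mirrors eval_max_new_tokens
print(tokenizer.decode(out[0], skip_special_tokens=True))
```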
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a PyTorch sketch of the corresponding optimizer and schedule follows the list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 6052
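For reference, the total train batch size of 16 is simply `micro_batch_size` × `gradient_accumulation_steps` = 4 × 4, and the optimizer and scheduler rows above correspond roughly to the following PyTorch setup. This is a sketch only; the trainer builds these objects internally, and the plain cosine schedule shown here does not model `cosine_min_lr_ratio: 0.3`.

```python
# Sketch of the optimizer and LR schedule implied by the hyperparameters above.
# A placeholder module stands in for the LoRA parameters; this is not the training script.
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-4,            # learning_rate: 0.0002
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.0,   # weight_decay: 0.0
)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10,      # lr_scheduler_warmup_steps
    num_training_steps=6052,  # training_steps / max_steps
)

# One optimizer step corresponds to 4 accumulated micro-batches of 4 sequences each.
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```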
Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 2.8134 | 0.0002 | 1 | 2.7262 |
| 2.0828 | 0.0302 | 200 | 2.1091 |
| 2.1855 | 0.0604 | 400 | 2.0203 |
| 1.8754 | 0.0906 | 600 | 1.9702 |
| 2.0078 | 0.1208 | 800 | 1.9297 |
| 1.9068 | 0.1510 | 1000 | 1.9031 |
| 2.1842 | 0.1812 | 1200 | 1.8784 |
| 1.9692 | 0.2114 | 1400 | 1.8576 |
| 1.5986 | 0.2416 | 1600 | 1.8413 |
| 1.8089 | 0.2718 | 1800 | 1.8265 |
| 2.0088 | 0.3020 | 2000 | 1.8140 |
| 1.6978 | 0.3322 | 2200 | 1.8014 |
| 1.6894 | 0.3624 | 2400 | 1.7903 |
| 1.3886 | 0.3926 | 2600 | 1.7749 |
| 1.7118 | 0.4228 | 2800 | 1.7639 |
| 1.7178 | 0.4530 | 3000 | 1.7588 |
| 1.7374 | 0.4832 | 3200 | 1.7460 |
| 1.7989 | 0.5134 | 3400 | 1.7356 |
| 1.7476 | 0.5436 | 3600 | 1.7266 |
| 2.116 | 0.5738 | 3800 | 1.7193 |
| 1.7626 | 0.6040 | 4000 | 1.7119 |
| 1.83 | 0.6342 | 4200 | 1.7047 |
| 1.8368 | 0.6644 | 4400 | 1.6997 |
| 1.3722 | 0.6945 | 4600 | 1.6924 |
| 1.9036 | 0.7247 | 4800 | 1.6884 |
| 1.6224 | 0.7549 | 5000 | 1.6830 |
| 1.5629 | 0.7851 | 5200 | 1.6776 |
| 1.9096 | 0.8153 | 5400 | 1.6730 |
| 1.8817 | 0.8455 | 5600 | 1.6696 |
| 1.7826 | 0.8757 | 5800 | 1.6655 |
| 1.545 | 0.9059 | 6000 | 1.6631 |
Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- PyTorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1