Built with Axolotl

See axolotl config

axolotl version: `0.4.1`

```yaml
adapter: lora
base_model: katuni4ka/tiny-random-falcon-40b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 9377853013503169_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/9377853013503169_train_data.json
  type:
    field_input: system
    field_instruction: prompt
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/84f6ffaf-61c0-43d3-ab02-80efdcb5015a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.00025
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 5880
micro_batch_size: 4
mlflow_experiment_name: /tmp/9377853013503169_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 2048
special_tokens:
  pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.02552348671247282
wandb_entity: null
wandb_mode: online
wandb_name: 88bd9ed0-784e-40ee-bad6-32ea4fdd723a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 88bd9ed0-784e-40ee-bad6-32ea4fdd723a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
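For reference, a minimal Python sketch of how the custom `format` spec above assembles a training prompt from one dataset record. The field mapping comes from the config (`field_instruction: prompt`, `field_input: system`, `field_output: output`); the record contents are hypothetical, and this mirrors rather than reproduces axolotl's internal prompt construction:

```python
# Hypothetical record from 9377853013503169_train_data.json; per the config,
# "instruction" is read from the record's "prompt" key and "input" from its
# "system" key, while "output" is the completion target.
record = {
    "system": "You are a concise assistant.",
    "prompt": "Summarize the following text.",
    "output": "...",
}

instruction, inp = record["prompt"], record.get("system", "")
if inp:
    prompt = f"{instruction} {inp}"   # format: '{instruction} {input}'
else:
    prompt = instruction              # no_input_format: '{instruction}'
completion = record["output"]         # loss is computed on this only (train_on_inputs: false)
print(prompt)
```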

# 84f6ffaf-61c0-43d3-ab02-80efdcb5015a

This model is a fine-tuned version of katuni4ka/tiny-random-falcon-40b on the 9377853013503169_train_data.json dataset described in the config above. It achieves the following results on the evaluation set:

- Loss: 10.5489

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.00025
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32 (micro batch size × accumulation steps; see the sketch below)
- optimizer: 8-bit AdamW (`adamw_bnb_8bit`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 5880
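The total train batch size follows directly from the config values (assuming single-GPU training, which the auto-generated card implies):

```python
# Effective (total) train batch size, assuming one GPU.
micro_batch_size = 4              # per-device batch size from the config
gradient_accumulation_steps = 8   # an optimizer step is taken every 8 micro-batches
num_devices = 1                   # assumption: single GPU

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 32
```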

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 88.9048       | 0.0002 | 1    | 11.1152         |
| 86.298        | 0.0168 | 100  | 10.7819         |
| 85.9568       | 0.0335 | 200  | 10.7421         |
| 85.6764       | 0.0503 | 300  | 10.7206         |
| 85.7664       | 0.0671 | 400  | 10.7042         |
| 85.5889       | 0.0838 | 500  | 10.6877         |
| 85.5986       | 0.1006 | 600  | 10.6747         |
| 85.3208       | 0.1173 | 700  | 10.6651         |
| 85.4494       | 0.1341 | 800  | 10.6567         |
| 85.3258       | 0.1509 | 900  | 10.6505         |
| 84.9632       | 0.1676 | 1000 | 10.6445         |
| 85.2519       | 0.1844 | 1100 | 10.6386         |
| 85.2217       | 0.2012 | 1200 | 10.6306         |
| 85.3373       | 0.2179 | 1300 | 10.6254         |
| 84.9467       | 0.2347 | 1400 | 10.6200         |
| 85.0272       | 0.2514 | 1500 | 10.6153         |
| 85.3044       | 0.2682 | 1600 | 10.6097         |
| 85.0647       | 0.2850 | 1700 | 10.6045         |
| 85.2542       | 0.3017 | 1800 | 10.5977         |
| 85.1251       | 0.3185 | 1900 | 10.5894         |
| 84.6675       | 0.3353 | 2000 | 10.5847         |
| 84.9831       | 0.3520 | 2100 | 10.5812         |
| 84.7303       | 0.3688 | 2200 | 10.5778         |
| 84.4756       | 0.3855 | 2300 | 10.5755         |
| 84.6714       | 0.4023 | 2400 | 10.5724         |
| 84.8902       | 0.4191 | 2500 | 10.5702         |
| 84.8641       | 0.4358 | 2600 | 10.5674         |
| 84.6129       | 0.4526 | 2700 | 10.5662         |
| 84.6396       | 0.4694 | 2800 | 10.5645         |
| 84.5829       | 0.4861 | 2900 | 10.5631         |
| 84.4782       | 0.5029 | 3000 | 10.5616         |
| 84.6577       | 0.5196 | 3100 | 10.5611         |
| 84.5671       | 0.5364 | 3200 | 10.5595         |
| 84.6259       | 0.5532 | 3300 | 10.5582         |
| 84.4862       | 0.5699 | 3400 | 10.5574         |
| 84.8068       | 0.5867 | 3500 | 10.5564         |
| 84.3837       | 0.6035 | 3600 | 10.5558         |
| 84.7825       | 0.6202 | 3700 | 10.5550         |
| 84.5206       | 0.6370 | 3800 | 10.5545         |
| 84.5291       | 0.6537 | 3900 | 10.5537         |
| 84.5388       | 0.6705 | 4000 | 10.5531         |
| 84.5735       | 0.6873 | 4100 | 10.5526         |
| 84.5167       | 0.7040 | 4200 | 10.5518         |
| 84.5027       | 0.7208 | 4300 | 10.5516         |
| 84.5048       | 0.7376 | 4400 | 10.5513         |
| 84.5671       | 0.7543 | 4500 | 10.5508         |
| 84.5944       | 0.7711 | 4600 | 10.5502         |
| 84.29         | 0.7878 | 4700 | 10.5501         |
| 84.299        | 0.8046 | 4800 | 10.5498         |
| 84.5276       | 0.8214 | 4900 | 10.5495         |
| 84.5476       | 0.8381 | 5000 | 10.5494         |
| 84.3845       | 0.8549 | 5100 | 10.5491         |
| 84.3847       | 0.8717 | 5200 | 10.5489         |
| 84.6318       | 0.8884 | 5300 | 10.5491         |
| 84.5371       | 0.9052 | 5400 | 10.5489         |
| 84.5082       | 0.9219 | 5500 | 10.5490         |
| 84.2422       | 0.9387 | 5600 | 10.5489         |
| 84.2555       | 0.9555 | 5700 | 10.5488         |
| 84.6036       | 0.9722 | 5800 | 10.5489         |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- PyTorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
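Since this repository contains a PEFT LoRA adapter rather than full model weights, here is a minimal loading sketch using the standard peft/transformers API; the prompt and generation settings are illustrative, with `max_new_tokens` matching the config's `eval_max_new_tokens`:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "katuni4ka/tiny-random-falcon-40b"
adapter_id = "Romain-XV/84f6ffaf-61c0-43d3-ab02-80efdcb5015a"

# trust_remote_code=True matches the training config above.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)

# Attach the LoRA adapter weights from this repo on top of the base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("Summarize the following text.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```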