Built with Axolotl

See the full axolotl config below (axolotl version 0.4.1):

```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-128k
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 6874e86e93e7fb0f_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/6874e86e93e7fb0f_train_data.json
  type:
    field_input: captions
    field_instruction: ASR
    field_output: whole_caption
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/6323f7b3-fb5b-4d9a-b962-7e3fa5d7f00c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.00025
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1020
micro_batch_size: 4
mlflow_experiment_name: /tmp/6874e86e93e7fb0f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.019169353571058877
wandb_entity: null
wandb_mode: online
wandb_name: 0ad63791-9d3a-456e-bb0b-d41731800650
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0ad63791-9d3a-456e-bb0b-d41731800650
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
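
The `datasets` entry uses axolotl's custom prompt format: `field_instruction` fills `{instruction}`, `field_input` fills `{input}`, and `field_output` is the training target. As an illustrative sketch (field names come from the config above; the record contents are hypothetical), each example is assembled roughly like this. With axolotl 0.4.x, a config like this one is typically launched via `accelerate launch -m axolotl.cli.train config.yaml`.

```python
# Illustrative sketch of how the configured custom format maps a JSON
# record to a prompt/target pair. Field names come from the config above;
# the record contents are hypothetical.
record = {
    "ASR": "transcript of the audio",        # field_instruction
    "captions": "per-segment captions",      # field_input
    "whole_caption": "full merged caption",  # field_output (training target)
}

# format: '{instruction} {input}' when an input is present,
# no_input_format: '{instruction}' otherwise.
if record.get("captions"):
    prompt = f"{record['ASR']} {record['captions']}"
else:
    prompt = record["ASR"]
target = record["whole_caption"]
print(prompt, "->", target)
```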

6323f7b3-fb5b-4d9a-b962-7e3fa5d7f00c

This model is a LoRA adapter fine-tuned from NousResearch/Yarn-Llama-2-7b-128k on a custom JSON dataset (6874e86e93e7fb0f_train_data.json). It achieves the following result on the evaluation set:

  • Loss: 1.2444

Model description

This repository contains a rank-32 LoRA adapter (alpha 64, dropout 0.1) for NousResearch/Yarn-Llama-2-7b-128k, targeting the q_proj, k_proj, and v_proj attention projections plus all linear layers (lora_target_linear: true). Judging by the dataset fields (ASR, captions, whole_caption), the task appears to be producing a complete caption from an ASR transcript and per-segment captions. No further description was provided.

Intended uses & limitations

More information needed
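
That said, loading the adapter follows the standard PEFT pattern. A minimal sketch is shown below; the model IDs are taken from the config above, while the dtype, device placement, and generation settings are illustrative choices:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Yarn-Llama-2-7b-128k"
adapter_id = "Romain-XV/6323f7b3-fb5b-4d9a-b962-7e3fa5d7f00c"

# trust_remote_code=True matches the training config; the Yarn models
# ship custom modeling code for their 128k context scaling.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("transcript and captions go here", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)  # matches eval_max_new_tokens
print(tokenizer.decode(out[0], skip_special_tokens=True))
```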

Training and evaluation data

The adapter was trained on a custom JSON dataset, 6874e86e93e7fb0f_train_data.json, with roughly 1.9% of examples (val_set_size: 0.0192) held out as the evaluation split. No further details about the data were provided.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.00025
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • training_steps: 1020
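
For readers replicating outside axolotl, a rough transformers equivalent of these settings is sketched below. This is illustrative only; the actual run was driven by the axolotl config above, not by this script:

```python
from transformers import TrainingArguments

# Rough equivalent of the hyperparameters above; illustrative only.
args = TrainingArguments(
    output_dir="miner_id_24",
    learning_rate=2.5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,   # 4 * 8 = 32 total train batch size on one device
    max_steps=1020,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    optim="adamw_bnb_8bit",
    max_grad_norm=1.0,
    weight_decay=0.0,
    bf16=True,
    tf32=True,
    eval_strategy="steps",
    eval_steps=100,
    save_strategy="steps",
    save_steps=100,
    load_best_model_at_end=True,
    logging_steps=1,
    seed=42,
)
```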

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 14.32         | 0.0001 | 1    | 1.7967          |
| 11.6005       | 0.0125 | 100  | 1.3561          |
| 11.0312       | 0.0250 | 200  | 1.3211          |
| 10.8028       | 0.0375 | 300  | 1.3008          |
| 11.4573       | 0.0500 | 400  | 1.2858          |
| 10.1087       | 0.0625 | 500  | 1.2735          |
| 10.3157       | 0.0750 | 600  | 1.2626          |
| 9.6447        | 0.0876 | 700  | 1.2543          |
| 9.4116        | 0.1001 | 800  | 1.2484          |
| 9.7464        | 0.1126 | 900  | 1.2453          |
| 10.1673       | 0.1251 | 1000 | 1.2444          |

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • PyTorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1
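
To approximate this environment, the pinned versions above can be installed directly, e.g. `pip install peft==0.13.2 transformers==4.46.0 datasets==3.0.1 tokenizers==0.20.1` plus a CUDA 12.4 build of PyTorch 2.5.0; the exact wheels depend on your platform.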