Built with Axolotl

The following axolotl config was used for training.

axolotl version: `0.10.0.dev0`

```yaml
adapter: lora
base_model: samoline/98c53cf2-34da-431b-92e4-57bb8539d7f4
bf16: true
datasets:
- data_files:
  - 3aa29942afd67a84_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/
  type:
    field_input: input
    field_instruction: instruct
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
eval_max_new_tokens: 128
evals_per_epoch: 4
flash_attention: false
fp16: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: true
hub_model_id: apriasmoro/deeb7d6a-e618-492a-bfae-7917fff30ceb
learning_rate: 0.0002
load_in_4bit: false
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 11
micro_batch_size: 16
mlflow_experiment_name: /tmp/3aa29942afd67a84_train_data.json
output_dir: llama3_lora_output
rl: null
sample_packing: true
save_steps: 1
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: true
trl: null
trust_remote_code: true
wandb_name: 157e4cf4-35e8-4bc9-9430-1b1363f2eb99
wandb_project: Gradients-On-Demand
wandb_run: llama3_h200_run
wandb_runid: 157e4cf4-35e8-4bc9-9430-1b1363f2eb99
warmup_steps: 100
weight_decay: 0.01
```
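
The dataset format section above maps each record's `instruct` and `input` fields into a single prompt string. Below is a minimal sketch of that mapping; the field names mirror the config, while the function name and example record are illustrative:

```python
# Sketch of the prompt construction implied by the dataset format above.
# Field names ("instruct", "input") mirror the config; the example record
# and function name are illustrative, not part of the training code.
def build_prompt(record: dict) -> str:
    instruction = record["instruct"]          # field_instruction
    user_input = record.get("input")          # field_input (may be empty)
    if user_input:
        return f"{instruction} {user_input}"  # format: '{instruction} {input}'
    return instruction                        # no_input_format: '{instruction}'

example = {"instruct": "Summarize the text.",
           "input": "LoRA adds low-rank adapter weights."}
print(build_prompt(example))
```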

deeb7d6a-e618-492a-bfae-7917fff30ceb

This model is a LoRA adapter fine-tuned from samoline/98c53cf2-34da-431b-92e4-57bb8539d7f4 on the 3aa29942afd67a84_train_data.json dataset listed in the config above.
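
A minimal sketch of loading this adapter on top of the base model with PEFT and Transformers. The repo IDs come from the config above; the dtype, prompt, and generation settings are illustrative assumptions, not taken from the training run:

```python
# Minimal sketch: load the base model, apply this LoRA adapter, and generate.
# Repo IDs come from the config above; dtype, prompt, and generation settings
# are illustrative assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "samoline/98c53cf2-34da-431b-92e4-57bb8539d7f4"
adapter_id = "apriasmoro/deeb7d6a-e618-492a-bfae-7917fff30ceb"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Summarize the text. LoRA adds low-rank adapter weights.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)  # eval_max_new_tokens in config
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```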

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a code sketch mirroring them follows the list):

  • learning_rate: 0.0002
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: AdamW (adamw_torch_fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • training_steps: 11
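
For reference, a hedged sketch of these values expressed as Transformers `TrainingArguments`. Axolotl builds its own trainer internally, so this is an approximation, not the exact object it constructs:

```python
# Hedged sketch of the hyperparameters above expressed as Transformers
# TrainingArguments; an approximation for reference, not Axolotl's exact setup.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama3_lora_output",  # output_dir from the config
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",        # betas and epsilon at their defaults
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=11,
    weight_decay=0.01,
    bf16=True,
    tf32=True,
    gradient_checkpointing=True,
    logging_steps=10,
    save_steps=1,
)
```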

Training results

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.5.1+cu124
  • Datasets 3.5.1
  • Tokenizers 0.21.1