---
library_name: peft
base_model: ziadrone/semanticoneplusaries1
tags:
- base_model:adapter:ziadrone/semanticoneplusaries1
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: semanticoneplusaries1
  results: []
---

# semanticoneplusaries1

This model is a fine-tuned version of [ziadrone/semanticoneplusaries1](https://huggingface.co/ziadrone/semanticoneplusaries1); the training dataset is not specified in this card.

## Model description

Per the card metadata, this repository holds a LoRA adapter trained with PEFT for text generation on top of the base model named above. No further description is provided; the stored adapter configuration can be inspected as sketched below.
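
Since the LoRA settings (rank, alpha, target modules) are not documented here, a short, hedged way to recover them is to load the adapter configuration shipped with the repo. This assumes the repo id from the metadata and standard PEFT behavior:

```python
from peft import PeftConfig

# Load the adapter config stored alongside the weights.
config = PeftConfig.from_pretrained("ziadrone/semanticoneplusaries1")

# Prints base_model_name_or_path, task_type, and (for LoRA) r, lora_alpha,
# target_modules, etc., as saved at training time.
print(config)
```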

## Intended uses & limitations

The metadata tags this adapter for the `text-generation` pipeline; intended uses and limitations are otherwise undocumented. A minimal inference sketch follows.
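
This sketch assumes the adapter loads on top of the base model named in the metadata and that both share a tokenizer; the prompt and generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

repo_id = "ziadrone/semanticoneplusaries1"

# Load the base model and tokenizer, then attach the LoRA adapter weights.
base_model = AutoModelForCausalLM.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = PeftModel.from_pretrained(base_model, repo_id)

# Generate a short continuation.
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```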

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
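
These settings map onto `transformers.TrainingArguments` roughly as in the sketch below; the output directory, and any dataset or LoRA configuration, are assumptions not recorded in this card.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="semanticoneplusaries1",  # hypothetical output path
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,       # total train batch size: 4 * 2 = 8
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                           # "Native AMP" mixed precision
)
```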

### Training results

No training or evaluation metrics were recorded in this card.

### Framework versions

- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.7.1+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4