mpasila committed
Commit 9aad0b1 · verified · 1 Parent(s): d6e8176

Update README.md

Files changed (1): README.md +62 -2

README.md CHANGED
@@ -11,15 +11,75 @@ language:
 - en
datasets:
 - mpasila/BadVibesV1-16k-context
library_name: peft
---

- # Details added soon!!

- Prompt format: ChatML (may be messed up by Unsloth atm)
 - en
datasets:
 - mpasila/BadVibesV1-16k-context
+ - adamo1139/4chan_archive_ShareGPT_fixed_newlines_unfiltered
+ - Fizzarolli/fse-raw-dump
+ - R-Arfin/Depression
+ - ShiniChien/creepypasta
library_name: peft
---

+ Uses this dataset: [mpasila/BadVibesV1-16k-context](https://huggingface.co/datasets/mpasila/BadVibesV1-16k-context)

+ ## Details about the dataset
+
+ It is a combination of the following datasets, each filtered and converted to ShareGPT format, with entries capped at a 16k context length as measured by [unsloth/Ministral-3-8B-Base-2512](https://huggingface.co/unsloth/Ministral-3-8B-Base-2512)'s tokenizer:
+ - 3216 entries from [adamo1139/4chan_archive_ShareGPT_fixed_newlines_unfiltered](https://huggingface.co/datasets/adamo1139/4chan_archive_ShareGPT_fixed_newlines_unfiltered)
+ - 19962 entries from [Fizzarolli/fse-raw-dump](https://huggingface.co/datasets/Fizzarolli/fse-raw-dump)
+ - 11547 entries from [R-Arfin/Depression](https://huggingface.co/datasets/R-Arfin/Depression)
+ - 5060 entries from [ShiniChien/creepypasta](https://huggingface.co/datasets/ShiniChien/creepypasta)
+
+ The data was then combined and shuffled. Total entries: 39785 (3216 + 19962 + 11547 + 5060).
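The 16k cap described above can be sketched roughly as follows. This is a minimal illustration, not the author's actual script: `filter_by_context_length` is a hypothetical helper, and a whitespace token counter stands in for the Ministral tokenizer the README says was used.

```python
def filter_by_context_length(entries, count_tokens, max_tokens=16384):
    """Keep ShareGPT entries whose full conversation fits in max_tokens.

    entries: list of {"conversations": [{"from": ..., "value": ...}, ...]}
    count_tokens: callable returning the token count of a string
    """
    kept = []
    for entry in entries:
        # Count tokens over the whole conversation, all turns joined
        text = "\n".join(turn["value"] for turn in entry["conversations"])
        if count_tokens(text) <= max_tokens:
            kept.append(entry)
    return kept

# Illustration only: a real run would count with the Ministral tokenizer,
# e.g. tok = AutoTokenizer.from_pretrained("unsloth/Ministral-3-8B-Base-2512")
#      count_tokens = lambda s: len(tok(s)["input_ids"])
entries = [
    {"conversations": [{"from": "human", "value": "hi"},
                       {"from": "gpt", "value": "hello"}]},
    {"conversations": [{"from": "human", "value": "word " * 20000}]},
]
short_only = filter_by_context_length(entries, lambda s: len(s.split()))
print(len(short_only))  # 1 — the 20000-word entry exceeds the cap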
+
+ # Prompt format: ChatML (the chat template may currently be mangled by Unsloth)
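For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers. A minimal sketch of rendering a ShareGPT-style conversation to ChatML (this is the generic format, not necessarily the exact template the tokenizer ships with):

```python
# ShareGPT role names mapped to ChatML role names
ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant"}

def to_chatml(conversations):
    """Render ShareGPT-style turns as a ChatML prompt string."""
    parts = [
        f"<|im_start|>{ROLE_MAP[turn['from']]}\n{turn['value']}<|im_end|>"
        for turn in conversations
    ]
    return "\n".join(parts) + "\n"

example = [
    {"from": "human", "value": "Tell me a short story."},
    {"from": "gpt", "value": "Once upon a time..."},
]
print(to_chatml(example))
# <|im_start|>user
# Tell me a short story.<|im_end|>
# <|im_start|>assistant
# Once upon a time...<|im_end|>
```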

Merged: [mpasila/BadVibesNemo-12B](https://huggingface.co/mpasila/BadVibesNemo-12B)

+ # Training params
+
+ Trained with a 16384-token context window, with the model loaded in 4-bit.
+
+ ```
+ model = FastLanguageModel.get_peft_model(
+     model,
+     r = 128,  # LoRA rank; suggested values: 8, 16, 32, 64 or 128
+     target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
+                       "gate_proj", "up_proj", "down_proj"],
+     lora_alpha = 32,
+     lora_dropout = 0,   # Any value is supported, but 0 is optimized
+     bias = "none",      # Any value is supported, but "none" is optimized
+     # "unsloth" gradient checkpointing uses less VRAM and fits larger batches
+     use_gradient_checkpointing = "unsloth",  # True or "unsloth" for very long context
+     random_state = 3407,
+     use_rslora = False,   # Rank-stabilized LoRA
+     loftq_config = None,  # LoftQ
+ )
+ ```
+
+ ```
+ from trl import SFTTrainer, SFTConfig
+
+ trainer = SFTTrainer(
+     model = model,
+     tokenizer = tokenizer,
+     train_dataset = dataset,
+     eval_dataset = None,  # Evaluation can be configured here
+     args = SFTConfig(
+         dataset_text_field = "text",
+         per_device_train_batch_size = 2,
+         gradient_accumulation_steps = 4,  # Gradient accumulation mimics a larger batch size
+         warmup_steps = 10,
+         num_train_epochs = 1,  # One full training pass over the data
+         # max_steps = 60,
+         learning_rate = 2e-4,  # Consider 2e-5 for longer training runs
+         logging_steps = 1,
+         optim = "adamw_8bit",
+         weight_decay = 0.001,
+         lr_scheduler_type = "linear",
+         seed = 3407,
+         report_to = "none",  # Set to e.g. "wandb" to log runs
+     ),
+ )
+ ```
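A quick check of what these settings mean in practice: the optimizer steps once per accumulated batch, so the effective batch size per device is the product of the two values above.

```python
per_device_train_batch_size = 2  # from the SFTConfig above
gradient_accumulation_steps = 4  # from the SFTConfig above

# Gradients from 4 micro-batches of 2 are accumulated before each optimizer step
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 8
```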
+
# Uploaded BadVibesNemo-LoRA-12B model

- **Developed by:** mpasila