---
base_model: SaintHoney/PersonalManV1.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- diabolic6045/open-ocra-alpaca-cleaned
- HashTag766/SMART-Goals-Validation
---

# Overview

#### Finetuned Qwen2.5-3B
#### The training focused on improving the model's instruction-following ability and its performance on domain-specific data.
#### Training time: 14.5 hours

### Datasets
#### [SMART-Goals-Validation](https://huggingface.co/datasets/HashTag766/SMART-Goals-Validation)
#### [open-ocra-alpaca-cleaned](https://huggingface.co/datasets/diabolic6045/open-ocra-alpaca-cleaned), only 120,000 examples

# Uploaded model

- **Developed by:** HashTag766
- **License:** apache-2.0
- **Finetuned from model:** SaintHoney/PersonalManV1.0

## The code used for finetuning

```python
%%capture
!pip install pip3-autoremove
!pip-autoremove torch torchvision torchaudio -y
!pip install torch torchvision torchaudio xformers --index-url https://download.pytorch.org/whl/cu121
!pip install unsloth

# ---------------------------------------------------------------------------
# Read the Hugging Face token from Kaggle secrets and log in.
from kaggle_secrets import UserSecretsClient
from huggingface_hub import login

user_secrets = UserSecretsClient()
hugging_face_token = user_secrets.get_secret("HF-Token")
login(hugging_face_token)

# ---------------------------------------------------------------------------
# Load the base model in 4-bit.
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048  # Choose any! Unsloth supports RoPE scaling internally.
dtype = None           # None for auto detection. Float16 for Tesla T4/V100, bfloat16 for Ampere+.
load_in_4bit = True    # Use 4-bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "SaintHoney/PersonalManV1.0",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)

# ---------------------------------------------------------------------------
# Attach LoRA adapters.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # Choose any number > 0. Suggested: 8, 16, 32, 64, 128.
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 16,
    lora_dropout = 0,  # Supports any, but = 0 is optimized.
    bias = "none",     # Supports any, but = "none" is optimized.
    # "unsloth" uses 30% less VRAM and fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth",  # True or "unsloth" for very long context.
    random_state = 3407,
    use_rslora = False,   # Rank-stabilized LoRA is also supported.
    loftq_config = None,  # And LoftQ.
)
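
# Optional sanity check (not part of the original notebook): the peft wrapper
# returned by get_peft_model exposes print_trainable_parameters(), which shows
# how small the LoRA adapter is relative to the frozen base weights.
model.print_trainable_parameters()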

# ---------------------------------------------------------------------------
# Format the dataset with the Alpaca prompt template.
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

EOS_TOKEN = tokenizer.eos_token  # Must add EOS_TOKEN

def formatting_prompts_func(examples):
    instructions = examples["instruction"]
    inputs = examples["input"]
    outputs = examples["output"]
    texts = []
    for instruction, input, output in zip(instructions, inputs, outputs):
        # Must add EOS_TOKEN, otherwise your generation will go on forever!
        text = alpaca_prompt.format(instruction, input, output) + EOS_TOKEN
        texts.append(text)
    return { "text" : texts, }

from datasets import load_dataset
# Use a slice such as split = "train[:120000]" to cap the number of examples.
dataset = load_dataset("HashTag766/SMART-Goals-Validation", split = "train")
dataset = dataset.map(formatting_prompts_func, batched = True,)

# ---------------------------------------------------------------------------
# Configure and run supervised fine-tuning.
from trl import SFTTrainer
from transformers import TrainingArguments, DataCollatorForSeq2Seq
from unsloth import is_bfloat16_supported

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    data_collator = DataCollatorForSeq2Seq(tokenizer = tokenizer),
    dataset_num_proc = 2,
    packing = False,  # Can make training 5x faster for short sequences.
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        num_train_epochs = 3,  # Set this for 1 full training run.
        # max_steps = 60,
        learning_rate = 2e-4,
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
        report_to = "none",  # Use "wandb" etc. for experiment tracking.
    ),
)

trainer_stats = trainer.train()

# ---------------------------------------------------------------------------
# Push the fine-tuned model and tokenizer to the Hub.
model.push_to_hub("hf/model...", token = "...")      # Online saving
tokenizer.push_to_hub("hf/model...", token = "...")  # Online saving
```

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
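
## Usage

A minimal inference sketch, assuming the fine-tuned model was pushed to the Hub as above. The repo id below is a placeholder (the card elides the real one), and the prompt reuses the same Alpaca template the model was trained on; leaving the `### Response:` slot empty lets the model complete it.

```python
from unsloth import FastLanguageModel

# Placeholder repo id; substitute the actual uploaded model name.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "HashTag766/your-model-name",
    max_seq_length = 2048,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # Enable Unsloth's fast inference mode.

# Same Alpaca template used during training.
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

inputs = tokenizer(
    [alpaca_prompt.format("Rewrite this goal as a SMART goal.", "Get better at Python.", "")],
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 256, use_cache = True)
print(tokenizer.batch_decode(outputs, skip_special_tokens = True)[0])
```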