# Model Card for ambari-7b-lora-dora-cot
This model is a fine-tuned version of [Cognitive-Lab/Ambari-7B-Instruct-v0.2](https://huggingface.co/Cognitive-Lab/Ambari-7B-Instruct-v0.2). It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Load the fine-tuned model as a chat-style text-generation pipeline
generator = pipeline("text-generation", model="Akshaymp/ambari-7b-lora-dora-cot", device="cuda")

# Pass the question in chat format and return only the newly generated text
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with supervised fine-tuning (SFT); a minimal sketch of such a setup is shown below.
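As a hedged illustration (not the actual training script), an SFT run with TRL might look like the following; the dataset path and hyperparameters are placeholders:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset path; the CoT-annotated datasets are described below.
dataset = load_dataset("json", data_files="cot_sft_data.jsonl", split="train")

training_args = SFTConfig(
    output_dir="ambari-7b-lora-dora-cot",
    per_device_train_batch_size=2,  # assumed value, not from this card
    num_train_epochs=1,             # assumed value, not from this card
)

trainer = SFTTrainer(
    model="Cognitive-Lab/Ambari-7B-Instruct-v0.2",
    args=training_args,
    train_dataset=dataset,
    # peft_config=...  # the LoRA/DoRA configuration shown later in this card
)
trainer.train()
```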
### Framework versions
- TRL: 0.24.0
- Transformers: 4.57.3
- PyTorch: 2.9.1
- Datasets: 4.3.0
- Tokenizers: 0.22.1
## Chain-of-Thought (CoT) Distillation Process
This model implements Chain-of-Thought (CoT) distillation combined with LoRA and DoRA optimization techniques to enhance reasoning capabilities while maintaining computational efficiency.
### CoT Distillation Methodology
The CoT distillation process involves:
- **Reasoning Trace Generation**: Intermediate reasoning steps are captured from a larger teacher model
- **Step-wise Supervision**: Training signals are provided at each reasoning step, not just the final output (see the sketch after this list)
- **Knowledge Compression**: Dense reasoning knowledge is compressed into the 7B parameter model
- **Adaptive Learning**: LoRA modules selectively optimize layers involved in reasoning tasks
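As a hedged sketch of what a distillation record with step-wise supervision might look like (the field names, helper function, and example text below are hypothetical, not the actual data format):

```python
# Hypothetical CoT distillation record: a teacher model's reasoning trace is
# serialized into the target text so the student is supervised on each
# intermediate step, not just the final answer.
record = {
    "instruction": "Translate to English: 'Naanu nale market ge hogthini.'",
    "reasoning_steps": [
        "Identify the Kanglish words: 'Naanu' = I, 'nale' = tomorrow, "
        "'market ge' = to the market, 'hogthini' = will go.",
        "Reorder the pieces into natural English syntax.",
    ],
    "answer": "I will go to the market tomorrow.",
}

def to_sft_text(rec):
    # Flatten the reasoning steps and final answer into one supervised target,
    # so the loss covers every intermediate step, not only the answer.
    steps = "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(rec["reasoning_steps"]))
    return f"{rec['instruction']}\n{steps}\nAnswer: {rec['answer']}"

print(to_sft_text(record))
```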
### LoRA Integration for CoT
**LoRA Configuration:**
- `r` (rank): 16, balancing adaptation capacity with parameter efficiency
- `lora_alpha`: 16, the scaling factor for LoRA updates
- `lora_dropout`: 0.0, no dropout applied to LoRA layers
- `target_modules`: applied to the attention and feedforward projections:
  - Query projections (`q_proj`)
  - Key projections (`k_proj`)
  - Value projections (`v_proj`)
  - Output projections (`o_proj`)
  - Gate projections (`gate_proj`)
  - Up/Down projections (`up_proj`, `down_proj`)
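This configuration can be written as a PEFT `LoraConfig`; a minimal sketch, assuming the standard PEFT API (the `use_dora` flag is explained in the next section):

```python
from peft import LoraConfig

peft_config = LoraConfig(
    r=16,                 # LoRA rank
    lora_alpha=16,        # scaling factor for LoRA updates
    lora_dropout=0.0,     # no dropout on LoRA layers
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # feedforward projections
    ],
    use_dora=True,        # enable DoRA (see the next section)
    task_type="CAUSAL_LM",
)
```

Passing this object to `SFTTrainer` via its `peft_config` argument applies the adapters during training.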
### DoRA (Weight-Decomposed Low-Rank Adaptation)
DoRA is enabled (`use_dora: true`) to further optimize the LoRA adaptation:
- Decomposes weight updates into magnitude and direction components
- Applies rank-restricted updates with improved generalization
- Reduces overfitting during CoT-specific fine-tuning
- Maintains the base model's general knowledge while adapting for reasoning
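A minimal NumPy sketch of this decomposition, following the DoRA formulation (shapes and values are illustrative only):

```python
import numpy as np

# Hedged sketch of DoRA's weight decomposition: the adapted weight is rebuilt
# from a learnable per-column magnitude and the direction of the LoRA-updated
# base weight.
d_out, d_in, r = 8, 8, 2
W0 = np.random.randn(d_out, d_in)   # frozen base weight
B = np.zeros((d_out, r))            # LoRA factor B (initialized to zero)
A = np.random.randn(r, d_in)        # LoRA factor A
m = np.linalg.norm(W0, axis=0)      # learnable magnitude, initialized from W0's column norms

V = W0 + B @ A                                   # direction component after the LoRA update
W_adapted = m * (V / np.linalg.norm(V, axis=0))  # rescale each column to the learned magnitude
```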
## Training Datasets
The model has been fine-tuned on the diverse task-specific datasets used for the base DoRA model:
- **Kanglish Shopping Queries**: Understanding and processing shopping-related queries in Kanglish (Kannada written in Roman script)
- **Multi-turn Conversations**: Handling multi-turn dialogue with context maintenance across multiple exchanges
- **Kanglish to English Translation**: Translation capability from Kanglish to English
- **English to Kanglish Translation**: Translation capability from English to Kanglish
Each dataset includes CoT annotations that provide reasoning steps, enabling the model to learn explicit reasoning patterns.
## Performance Characteristics
- **Memory Efficient**: LoRA+DoRA reduces the trainable parameters to a small fraction of the full 7B model
- **Reasoning Enhanced**: CoT distillation improves multi-step reasoning capabilities
- **Fast Inference**: LoRA modules can be merged post-training for zero inference overhead (see the sketch below)
- **Task Specific**: Maintains general capabilities while excelling at specialized reasoning tasks
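A hedged sketch of that merge step with PEFT, assuming the repository ships a standalone adapter (if the weights are already merged, this step is unnecessary):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the frozen base model, attach the LoRA/DoRA adapter, then fold the
# adapter weights into the base weights for adapter-free inference.
base = AutoModelForCausalLM.from_pretrained("Cognitive-Lab/Ambari-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base, "Akshaymp/ambari-7b-lora-dora-cot")
merged = model.merge_and_unload()
merged.save_pretrained("ambari-7b-lora-dora-cot-merged")
```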
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year = 2020,
    journal = {GitHub repository},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```