---
language: en
tags:
- jamba
- lora
- chat
- fine-tuning
license: apache-2.0
---
# Jamba Chat LoRA
This is a LoRA fine-tune of the Jamba model, trained on chat conversations from the UltraChat dataset.
## Model Description
- **Base Model:** LaferriereJC/jamba_550M_trained
- **Training Data:** UltraChat dataset
- **Task:** Conversational AI
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation)
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model
model = AutoModelForCausalLM.from_pretrained(
    "LaferriereJC/jamba_550M_trained",
    trust_remote_code=True
)

# Attach the LoRA adapter weights
model = PeftModel.from_pretrained(model, "your-username/jamba-chat-lora")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(
    "LaferriereJC/jamba_550M_trained",
    trust_remote_code=True
)

# Generate a response to a chat-style prompt
text = "User: How are you today?\nAssistant:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Training Details
- **Training Data:** UltraChat dataset (subset)
- **LoRA Config** (see the configuration sketch below):
  - Rank: 16
  - Alpha: 32
  - Target Modules: feed-forward experts in the last layer
  - Dropout: 0.1
- **Training Parameters:**
  - Learning Rate: 5e-4
  - Optimizer: AdamW (32-bit)
  - LR Scheduler: Cosine
  - Warmup Ratio: 0.03
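The settings above can be expressed with `peft` and `transformers` roughly as follows. This is a minimal sketch, not the exact training script: the `target_modules` names, batch size, and epoch count are not stated in this card and are assumptions; the real module names for the last-layer feed-forward experts depend on the Jamba implementation and can be checked with `print(model)`.
```python
from peft import LoraConfig, get_peft_model
from transformers import TrainingArguments

# LoRA settings from the list above. The target_modules entries are
# placeholders: the actual names of the last-layer feed-forward expert
# projections depend on this Jamba implementation.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["gate_proj", "up_proj", "down_proj"],  # assumed names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # `model` is the base model loaded as in Usage

# Optimizer settings from the list above; batch size and epoch count are
# assumptions, since they are not given in this card.
training_args = TrainingArguments(
    output_dir="jamba-chat-lora",
    learning_rate=5e-4,
    optim="adamw_torch",            # 32-bit AdamW
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    per_device_train_batch_size=4,  # assumption
    num_train_epochs=1,             # assumption
)
```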