---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# MulitLoRA-Mistral-Merging
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64e380b2e12618b261fa6ba0/A7u3zJDO6kUAdbS9pSIEt.jpeg)
MulitLoRA-Mistral-Merging is a multi-LoRA TIES merge of the following adapters, made with [🧜 AutoLoRAMerging](https://colab.research.google.com/drive/1cEj5p42NZ6Vg2HVYEGL2IM6n0G0gwvQU?usp=sharing):
* [Yhyu13/dolphin-2.6-mistral-7b-dpo-laser-function-calling-lora](https://huggingface.co/Yhyu13/dolphin-2.6-mistral-7b-dpo-laser-function-calling-lora)
* [predibase/legal](https://huggingface.co/predibase/legal)
* [predibase/wikisql](https://huggingface.co/predibase/wikisql)
The merged adapter can generate SQL statements, give legal advice, and perform function calling.
## 🧩 Configuration
```yaml
density: 0.2
merging_type: "ties"
weights: [2.0, 0.3, 0.7]
```
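With TIES merging, each adapter's weight deltas are pruned to the top `density` fraction by magnitude, sign conflicts across adapters are resolved, and the surviving deltas are combined scaled by the per-adapter `weights` above. A merge with this configuration can be reproduced with PEFT's `add_weighted_adapter`; the sketch below is illustrative only: the base checkpoint and the adapter-to-weight ordering are assumptions, not confirmed by this card.

```python
# Illustrative sketch of a TIES multi-LoRA merge with PEFT.
# Assumptions: the base checkpoint and the adapter/weight ordering below.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")  # assumed base

# Load one adapter, then attach the others under their own names
model = PeftModel.from_pretrained(
    base,
    "Yhyu13/dolphin-2.6-mistral-7b-dpo-laser-function-calling-lora",
    adapter_name="function_calling",
)
model.load_adapter("predibase/legal", adapter_name="legal")
model.load_adapter("predibase/wikisql", adapter_name="wikisql")

# Combine the adapters with TIES, mirroring the configuration above
model.add_weighted_adapter(
    adapters=["function_calling", "legal", "wikisql"],
    weights=[2.0, 0.3, 0.7],  # assumed to match the adapter order above
    adapter_name="ties_merge",
    combination_type="ties",
    density=0.2,
)
model.set_adapter("ties_merge")
```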
## 💻 Usage
```bash
pip install -qU transformers bitsandbytes accelerate peft
```

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the base model referenced by the adapter's config
peft_model = "abideen/MulitLoRA-Mistral-Merging"
config = PeftConfig.from_pretrained(peft_model)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, device_map="auto")

# The tokenizer ships with the adapter; resize embeddings in case it added tokens
tokenizer = AutoTokenizer.from_pretrained(peft_model)
model.resize_token_embeddings(len(tokenizer))

# Attach the merged LoRA adapter
model = PeftModel.from_pretrained(model, peft_model)
prompt = "Table: Sports; Columns: ['Team', 'Head Coach', 'President', 'Home Ground', 'Location'] Natural Query: Who is the Head Coach of the team whose President is Mario Volarevic? SQL Query:" # @param {type:"string"}
messages = [
{"role": "user", "content": prompt},
]
# Apply the chat template, then move the inputs to the model's device
text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=256,
do_sample=True,
top_p=0.95,
temperature=0.2,
repetition_penalty=1.2,
eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0]))
```
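Because `bitsandbytes` is included in the install step, the base model can also be loaded in 4-bit to reduce memory. This is a minimal sketch, assuming the same `config` and `peft_model` variables as above; quantization may slightly alter generations.

```python
# Optional: load the base model in 4-bit via bitsandbytes (sketch, not required)
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, peft_model)
```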