---
library_name: peft
base_model: LSX-UniWue/LLaMmlein_1B_prerelease
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: LLaMmlein_1b_chat_all
results: []
datasets:
- LSX-UniWue/Guanako
- FreedomIntelligence/sharegpt-deutsch
- FreedomIntelligence/alpaca-gpt4-deutsch
language:
- de
license: other
---
# LLäMmlein 1B Chat

> [!WARNING]
> While the base versions of our LLäMmlein are quite good, our chat versions are research demonstrations and are not ready for settings where close instruction following is necessary. Please check the paper for more details.

This is a chat adapter for the German TinyLlama 1B language model.
Find more details on our [page](https://www.informatik.uni-wuerzburg.de/datascience/projects/nlp/llammlein/) and in our [preprint](https://arxiv.org/abs/2411.11171)!
We also merged the adapter and converted it to GGUF [here](https://huggingface.co/LSX-UniWue/LLaMmlein_1B_alternative_formats).
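If you want to reproduce the merge locally, PEFT's `merge_and_unload()` folds the adapter weights back into the base model. A minimal sketch, assuming a LoRA-style adapter; the output directory name is illustrative:

```py
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the base model and resize its embeddings to the chat tokenizer's vocabulary
base = AutoModelForCausalLM.from_pretrained(
    "LSX-UniWue/LLaMmlein_1B_prerelease",
    torch_dtype=torch.bfloat16,
)
base.resize_token_embeddings(32064)

# attach the adapter, then fold its weights into the base model
merged = PeftModel.from_pretrained(
    base, "LSX-UniWue/LLaMmlein_1B_chat_selected"
).merge_and_unload()

# save the standalone model together with its tokenizer (path is illustrative)
merged.save_pretrained("llammlein-1b-chat-merged")
AutoTokenizer.from_pretrained(
    "LSX-UniWue/LLaMmlein_1B_chat_selected"
).save_pretrained("llammlein-1b-chat-merged")
```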
## Run it
```py
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.manual_seed(42)

# script config
base_model_name = "LSX-UniWue/LLaMmlein_1B_prerelease"
chat_adapter_name = "LSX-UniWue/LLaMmlein_1B_chat_selected"
device = "cuda"  # or "mps" on Apple silicon

# chat history
messages = [
    {
        "role": "user",
        "content": "Na wie geht's?",
    },
]

# load the base model and attach the chat adapter
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.bfloat16,
    device_map=device,
)
# the chat tokenizer adds special tokens, so the embedding matrix
# must be resized to match its vocabulary before loading the adapter
base_model.resize_token_embeddings(32064)
model = PeftModel.from_pretrained(base_model, chat_adapter_name)
tokenizer = AutoTokenizer.from_pretrained(chat_adapter_name)

# encode the message in "ChatML" format and append the generation prompt
chat = tokenizer.apply_chat_template(
    messages,
    return_tensors="pt",
    add_generation_prompt=True,
).to(device)

# generate and print the response
print(
    tokenizer.decode(
        model.generate(
            chat,
            max_new_tokens=300,
            pad_token_id=tokenizer.pad_token_id,
            eos_token_id=tokenizer.eos_token_id,
        )[0],
        skip_special_tokens=False,
    )
)
```
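For interactive use, the response can also be streamed token by token with transformers' `TextStreamer` instead of decoding it at the end. A minimal sketch reusing `model`, `tokenizer`, and `chat` from the script above:

```py
from transformers import TextStreamer

# print tokens to stdout as they are generated, without echoing the prompt
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(
    chat,
    max_new_tokens=300,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    streamer=streamer,
)
```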
[Data Take Down](https://www.informatik.uni-wuerzburg.de/datascience/projects/nlp/llammlein/)