---
library_name: peft
base_model: LSX-UniWue/LLaMmlein_7B
tags:
- trl
- sft
- generated_from_trainer
datasets:
- LSX-UniWue/Guanako
- FreedomIntelligence/sharegpt-deutsch
- FreedomIntelligence/alpaca-gpt4-deutsch
language:
- de
license: other
---

# LLäMmlein 7B Chat

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6070431e1a4c4d313032558b/br1m6JS0DOT_SGTHywfi3.png)

> [!WARNING]
> While the base versions of our LLäMmlein are quite good, our chat versions are research demonstrations and are not ready to be used in settings where close instruction following is necessary. Please check the paper for more details.

This is an early preview of our instruction-tuned 7B model, trained using limited German-language resources.
Please note that it is not the final version - we are actively working on improvements!

Find more details on our [project page](https://www.informatik.uni-wuerzburg.de/datascience/projects/nlp/llammlein/) and in our [preprint](https://arxiv.org/abs/2411.11171)!

## Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the chat model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("LSX-UniWue/LLaMmlein_7B_chat")
tokenizer = AutoTokenizer.from_pretrained("LSX-UniWue/LLaMmlein_7B_chat")

# "mps" targets Apple Silicon; use "cuda" or "cpu" on other hardware
device = "mps"
model = model.to(device)

# "What are the most important sights of Berlin?"
messages = [
    {
        "role": "user",
        "content": "Was sind die wichtigsten Sehenswürdigkeiten von Berlin?",
    },
]

# format the conversation with the model's chat template and append the
# generation prompt so the model responds as the assistant
chat = tokenizer.apply_chat_template(
    messages,
    return_tensors="pt",
    add_generation_prompt=True,
).to(device)

print(
    tokenizer.decode(
        model.generate(
            chat,
            max_new_tokens=100,
            pad_token_id=tokenizer.pad_token_id,
            eos_token_id=tokenizer.eos_token_id,
            repetition_penalty=1.1,
        )[0],
        skip_special_tokens=False,
    )
)
```
[Data Take Down](https://www.informatik.uni-wuerzburg.de/datascience/projects/nlp/llammlein/)