---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- fr
- en
datasets:
- jpacifico/French-Alpaca-dataset-Instruct-55K
---

# Uploaded fine-tuned model

- **Developed by:** mintujohnson
- **License:** apache-2.0
- **Fine-tuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

# Inference

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

model_path = "mintujohnson/Llama-3.2-3B-French-Instruct"
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = model_path,
    max_seq_length = 128,
    dtype = None,        # auto-detect
    load_in_4bit = True,
)

def inference(messages, model, tokenizer):
    FastLanguageModel.for_inference(model)  # enable native 2x faster inference

    inputs = tokenizer.apply_chat_template(
        messages,
        tokenize = True,
        add_generation_prompt = True,  # must be set for generation
        return_tensors = "pt",
    ).to("cuda")
    print(tokenizer.decode(inputs[0], skip_special_tokens = False))

    text_streamer = TextStreamer(tokenizer, skip_prompt = True)
    return model.generate(
        input_ids = inputs,
        streamer = text_streamer,
        max_new_tokens = 128,
        use_cache = True,
        temperature = 1.5,
        min_p = 0.1,
    )

messages = [
    {"role": "user", "content": "où est la Normandie?"},
]
output = inference(messages, model, tokenizer)
```
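For reference, the prompt string that `apply_chat_template` produces can be sketched in plain Python. This is an approximation of the Llama 3.x chat format (the `<|start_header_id|>`/`<|eot_id|>` special tokens are assumed from that family's template); the authoritative template ships in the model's `tokenizer_config.json`, so verify against it before relying on this:

```python
def build_llama3_prompt(messages, add_generation_prompt=True):
    """Approximate the Llama 3.x chat template as a plain string.

    `messages` is a list of {"role": ..., "content": ...} dicts, as passed
    to `tokenizer.apply_chat_template` in the inference snippet above.
    """
    prompt = "<|begin_of_text|>"
    for msg in messages:
        # Each turn is wrapped in role headers and terminated with <|eot_id|>.
        prompt += f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
        prompt += msg["content"] + "<|eot_id|>"
    if add_generation_prompt:
        # A trailing assistant header cues the model to generate its reply.
        prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

print(build_llama3_prompt([{"role": "user", "content": "où est la Normandie?"}]))
```

This is why `add_generation_prompt=True` matters: without the trailing assistant header, the model has no cue that it is its turn to speak.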

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)