---
library_name: transformers
language:
- am
base_model:
- rasyosef/Llama-3.2-400M-Amharic
---
This model is an instruction-tuned version of [Llama 3.2 400M Amharic](https://huggingface.co/rasyosef/Llama-3.2-400M-Amharic).
### How to use
### Chat Format
Given the nature of the training data, this instruct model is best suited for prompts that use the following chat format, where `αŒ₯ያቄ?` ("question?" in Amharic) is a placeholder for your input:
```markdown
<|im_start|>user
αŒ₯ያቄ?<|im_end|>
<|im_start|>assistant
```
For example, asking the model to name three African countries:
```markdown
<|im_start|>user
αˆΆαˆ΅α‰΅ α‹¨αŠ ααˆͺካ αˆ€αŒˆαˆ«α‰΅ αŒ₯α‰€αˆ΅αˆαŠ<|im_end|>
<|im_start|>assistant
```
The model generates its response after the `<|im_start|>assistant` tag.
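You do not have to assemble the tags by hand. Below is a minimal sketch that renders the format with the tokenizer, assuming the checkpoint ships a ChatML-style chat template (as the tags above suggest):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rasyosef/Llama-3.2-400M-Amharic-Instruct")

# "Name three African countries."
messages = [{"role": "user", "content": "αˆΆαˆ΅α‰΅ α‹¨αŠ ααˆͺካ αˆ€αŒˆαˆ«α‰΅ αŒ₯α‰€αˆ΅αˆαŠ"}]

# tokenize=False returns the formatted prompt string; add_generation_prompt=True
# appends the `<|im_start|>assistant` header so the model begins its reply there.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```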
### Sample inference code
First, install the latest version of the `transformers` library:
```bash
pip install -Uq transformers
```
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

llama3_am = pipeline(
    "text-generation",
    model="rasyosef/Llama-3.2-400M-Amharic-Instruct",
    device_map="auto"
)

# "Name three African countries."
messages = [{"role": "user", "content": "αˆΆαˆ΅α‰΅ α‹¨αŠ ααˆͺካ αˆ€αŒˆαˆ«α‰΅ αŒ₯α‰€αˆ΅αˆαŠ"}]

# The pipeline applies the chat template automatically; return_full_text=False
# drops the prompt and returns only the generated reply.
llama3_am(messages, max_new_tokens=128, repetition_penalty=1.1, return_full_text=False)
```
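Equivalently, here is a minimal sketch of the same call without the pipeline helper, assuming the standard `AutoTokenizer` / `AutoModelForCausalLM` APIs; the generation settings mirror the pipeline example above:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rasyosef/Llama-3.2-400M-Amharic-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# "Name three African countries."
messages = [{"role": "user", "content": "αˆΆαˆ΅α‰΅ α‹¨αŠ ααˆͺካ αˆ€αŒˆαˆ«α‰΅ αŒ₯α‰€αˆ΅αˆαŠ"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=128, repetition_penalty=1.1)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```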
Output:
```python
[{'generated_text': '1. ግα‰₯ፅ 2. αŠ“α‹­αŒ„αˆͺα‹« 3. αŒ‹αŠ“'}]
```
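Here the model names three African countries: Egypt (ግα‰₯ፅ), Nigeria (αŠ“α‹­αŒ„αˆͺα‹«), and Ghana (ጋና).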