---
library_name: transformers
language:
- am
base_model:
- rasyosef/Llama-3.2-400M-Amharic
---
This model is an instruction-tuned version of [Llama 3.2 400M Amharic](https://huggingface.co/rasyosef/Llama-3.2-400M-Amharic).
## How to use

### Chat Format
Given the nature of the training data, the Llama 3.2 400M Amharic Instruct model is best suited for prompts that use the chat format below. You can provide the prompt as a question, with this generic template:
```markdown
<|im_start|>user
ጥያቄ?<|im_end|>
<|im_start|>assistant
```
For example:
```markdown
<|im_start|>user
ሶስት የአፍሪካ ሀገራት ጥቀስልኝ<|im_end|>
<|im_start|>assistant
```
where the model generates the text after `<|im_start|>assistant`. (The example question asks the model to name three African countries.)
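
If you prefer not to assemble this string by hand, the `transformers` chat-template machinery can render it for you. A minimal sketch, assuming the model's tokenizer ships with the chat template shown above (worth verifying on the model repo):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rasyosef/Llama-3.2-400M-Amharic-Instruct")

# "Name three African countries"
messages = [{"role": "user", "content": "ሶስት የአፍሪካ ሀገራት ጥቀስልኝ"}]

# Render the messages with the tokenizer's chat template and append the
# `<|im_start|>assistant` generation prompt, mirroring the format above.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```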
### Sample inference code
First, install the latest version of `transformers`:
```
pip install -Uq transformers
```
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

# Load the instruction-tuned Amharic model as a text-generation pipeline.
llama3_am = pipeline(
    "text-generation",
    model="rasyosef/Llama-3.2-400M-Amharic-Instruct",
    device_map="auto"
)

# "Name three African countries"
messages = [{"role": "user", "content": "ሶስት የአፍሪካ ሀገራት ጥቀስልኝ"}]

# return_full_text=False strips the prompt from the returned text.
llama3_am(messages, max_new_tokens=128, repetition_penalty=1.1, return_full_text=False)
```
Output:
```python
[{'generated_text': '1. ግብፅ 2. ናይጄሪያ 3. ጋና'}]
```
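
For finer control over decoding than the pipeline exposes, you can load the model and tokenizer separately and call `generate` yourself. A minimal sketch under the same chat-template assumption; the generation settings simply mirror the pipeline example above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rasyosef/Llama-3.2-400M-Amharic-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# "Name three African countries"
messages = [{"role": "user", "content": "ሶስት የአፍሪካ ሀገራት ጥቀስልኝ"}]

# Build the chat-formatted prompt and tokenize it in one step.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=128,
    repetition_penalty=1.1,  # same setting as the pipeline example
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```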